The future of workplace, life and business, June 30

A week of travels and many long-form podcasts later, I return here to reflect on the world through a technology lens. This time the focus is on regulation, future computing and models.

Midjourney prompt: /imagine prompt: AI foundation models providers increasing transparency in fear of EU regulation:: –v 4

To increase transparency, foundation models providers should clearly communicate the limitations of their models and disclose the sources of training data. They should document the training processes and enable external audits to ensure fairness and accountability. Additionally, engaging in open research, addressing user concerns, and providing interpretability tools can further enhance transparency and foster responsible use of foundation models.

OpenAI prompt: what should foundation models providers do to increase transparency?

The current US administration is grappling with crafting a policy that suitably distinguishes AI tech designed for consumer use from that built for reconnaissance and offensive purposes by Chinese companies. Considering how modular and API-driven the solutions' functionality can be, it's not an easy task. Also bear in mind that some 30 years ago new tech was first tested and applied by the military, then the government sector, then large enterprises, followed by SMEs and eventually consumers. Today, consumers adopt it first, followed by the SME sector and then the rest. Innovation speed and cadence have become such that large organisations find it increasingly hard to embed new tech into their processes and working practices.

Stanford University's CRFM, which assesses AI foundation models, has taken a look at the major market players in light of the proposed EU AI Act. They find that incumbents score poorly on the current areas of concern and stress the need to enact the Act. The findings show a sharp divide between the expected and actual behaviour of foundation model providers when it comes to closed models. The research recommends the establishment of common standards. When talking about disclosures, I especially like the following statement: "While progress in each of these areas requires some work, in many cases we believe this work is minimal relative to building and providing the foundation model and should be seen as a prerequisite for being a responsible and reputable model provider." In short – document as you build your models and have the courage to disclose what the black box does. Read the full report here – it's an enlightening 10 minutes.
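To make "document as you build" concrete, here is a minimal sketch of what machine-readable model documentation could look like. The field names and values are my own illustrative assumptions, not the CRFM's or the AI Act's schema.

```python
import json

# Hypothetical "model card" assembled alongside training.
# All field names and values are illustrative assumptions.
model_card = {
    "model_name": "example-foundation-model",
    "training_data_sources": ["licensed corpus X", "public web crawl Y"],
    "known_limitations": ["hallucinates facts", "English-centric"],
    "evaluation": {"benchmark": "held-out test set", "accuracy": 0.87},
    "compute": {"gpu_hours": 120_000, "hardware": "A100"},
    "intended_use": "research and prototyping",
    "contact_for_audits": "audits@example.com",
}

# Publish the card next to the model weights so auditors can find it.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

The point is less the exact schema than the habit: if the card is filled in as the model is built, disclosure costs little extra.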

The EU has launched a project to build four AI test centres, both virtual and physical, which according to Bloomberg "will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities." These should go hand in hand with and support the recent AI Act. Let's hope developers will eagerly use these facilities.

Google is desperate not to lose its users and to gain more of a foothold in the AI race. They recently released a new secure AI framework aimed at business customers. The principles listed are nothing new, and many organisations already apply them; the AI space is no different, says Google. Either way, it's a good reminder to reflect on whether what we do is helping us towards a safer future. Elsewhere, Google has developed third-party integrations within its Docs suite (part of the "smart canvas" work) for both business and private users. The Verge has taken a peek at the UX. The search giant also claims its AI chatbot, Bard, has improved its coding and maths capabilities. Some of the staff have labelled the hallucinating algorithm useless, but that won't affect its march forward. This just makes me wonder when we get to the point where we trust our AI companions to write themselves new functionality and validate it. A fun experiment when conducted in a lab environment, but a tad scary if done live with access to the code repos and the ability to commit.
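On that last worry, here is a minimal sketch of the kind of guardrail I'd want before any AI-written patch reaches a repository. The gate and its fields are hypothetical, purely to illustrate the human-in-the-loop check.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    author: str            # "ai" or "human"
    diff: str
    human_approved: bool = False

def can_commit(patch: Patch) -> bool:
    """Only allow commits from humans, or AI patches a human has signed off on."""
    if patch.author == "ai" and not patch.human_approved:
        return False
    return True

if __name__ == "__main__":
    ai_patch = Patch(author="ai", diff="+ def new_feature(): ...")
    print(can_commit(ai_patch))   # False until a reviewer approves
    ai_patch.human_approved = True
    print(can_commit(ai_patch))   # True
```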

With so much anticipation around ESG regulation and the need to increase transparency on associated topics, I wonder how the foundation model providers are doing. You can measure when you know what matters and when you collect the relevant data. Google, Microsoft, Amazon, Oracle, IBM and others are actively putting effort into measuring their impact and taking action on it. I like Microsoft's report, as it's built on their Sustainability Manager. What bothers me a little is that smaller foundation model providers do not focus on such resource drains as measuring and reporting on their impact. Or if they are, the data is well hidden on their websites. From speaking to customers and tech providers I get a sense of urgency from larger organisations to start collecting and reporting on their performance and environmental impact. They are starting to understand its value, which lies not just in compliance but also in improved trust and business performance.
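As a toy illustration of "measure what matters", here is a sketch that converts measured energy use per workload into an emissions estimate. The workload figures and the grid-intensity factor are invented, not any provider's actual data.

```python
# Illustrative only: turn measured energy use into an emissions estimate.
workloads_kwh = {
    "model-training": 50_000.0,
    "inference-serving": 12_000.0,
    "data-pipeline": 3_500.0,
}

GRID_INTENSITY_KG_CO2E_PER_KWH = 0.4   # assumed average grid factor

def estimate_emissions(kwh_by_workload: dict[str, float]) -> dict[str, float]:
    """Return estimated kg CO2e per workload."""
    return {name: kwh * GRID_INTENSITY_KG_CO2E_PER_KWH
            for name, kwh in kwh_by_workload.items()}

emissions = estimate_emissions(workloads_kwh)
for name, kg in sorted(emissions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {kg:,.0f} kg CO2e")
print(f"total: {sum(emissions.values()):,.0f} kg CO2e")
```

Once numbers like these are collected per workload, reporting (and improving) them becomes a routine exercise rather than a scramble.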

Intel has entered the quantum computing race, focusing on its current manufacturing capabilities and aiming to replicate its past success with silicon chips. Their competitors, however, are doubtful of the chosen path and are sticking to theirs. The more approaches, the merrier – a single solution is never the best choice at the beginning, and it doesn't support innovation. Commercial mass availability is 5+ years away, but it'll take us a huge step closer to AGI. CNET covers the story here.

If you believe that Meta has been lagging behind in the AI race, think again. Even better, listen to Lex Fridman's interview with Mark Zuckerberg and then look at the recent announcement of its text-to-speech technology – Voicebox. They are moving extremely fast, so fast that the model won't be released to the public for fear of misuse. At the same time, Meta has published an overview in the form of 22 system cards, detailing how Facebook and Instagram serve content to users. The Verge provides an overview.

Patterns are everywhere, and so are the frameworks that help make our professional lives easier. Christopher Alexander first described them in the 1970s for architecture, and software developers quickly saw the value in the idea. Here's a brief recap of the design patterns.
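To make the recap concrete, here is one classic pattern from that lineage, the Observer, in a few lines of Python; the newsletter example is of course my own.

```python
# Observer pattern: subscribers register with a subject and get notified on change.
class Newsletter:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, issue: str):
        for notify in self._subscribers:
            notify(issue)

if __name__ == "__main__":
    letter = Newsletter()
    letter.subscribe(lambda issue: print(f"reader A got: {issue}"))
    letter.subscribe(lambda issue: print(f"reader B got: {issue}"))
    letter.publish("The future of workplace, life and business")
```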

When investors were piling funding into crypto, cautious voices asked about the value proposition (and so did I). Looking at the money being funnelled into the AI and LLM race, everyone again wants to fit on the boat and be the first to invest. Inflection AI is the next darling, with no problem raising as much as it deems necessary to develop its personal AI – Pi. In the words of Mustafa Suleyman (who co-founded DeepMind): "… it's a historic growth opportunity. Inflection's best bet is to "blitz-scale" and raise funding voraciously to grow as fast as possible, risks be damned."

As English is being deemed the most popular programming language, I wonder when support will be extended to other major languages. It's not easy, but being able to interact with and direct AI in my native language would open immense growth opportunities for many. I recognise it could also leave more people behind, and progress would probably happen in waves – tinkerers and small entrepreneurs discover something new and start using it, and then it gets turned into a platform service, available to all at a fixed cost.

Where big tech is more careful and places guardrails around its AI-powered tools and platforms, small developers may choose not to. Lack of resources and the pressure to avoid reputation-affecting hiccups often drive their decisions on how the tools behave. A NY Times article covers this topic. Some creators also cite personal responsibility and a preference for getting an answer rather than arguing with the AI tool. If we accept the view of Eric Hartford, a creator of WizardLM-Uncensored, we'll just build more echo chambers and division between different groups. "Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model," Mr. Hartford wrote. "Every demographic and interest group deserves their model. Open source is about letting people choose." I can't agree with the suggestion that generating fake news is OK but distributing it isn't. If we chose not to distribute the content, it would not be generated – there would simply be no commercial value in it. Using such tools for education would be fine; however, that's not the goal of their creators.

I touched on the Bing AI integration a few weeks ago. Now Microsoft Bing will build you a shopping guide when you ask it! That's pretty amazing, and I'd be very interested in its impact on the review-and-guide cottage industry. Whilst many such sites are already auto-generated, low-quality gibberish, others are very good – notably the ones behind a paywall. Now, how will sites like Rtings and others reviewing gear use chatbot capabilities to make their reviews so good that people are willing to pay for them?

And the last item covers rental and tenant assessment software, which is widely used in the US. According to the Lever article, it's as biased a black box as many other AI-powered solutions. With more reliance on data and more trust in its quality, there are likely to be more groups that will be treated unfairly. Yet I don't think the progress can or should be reversed – the software developers need to reduce the biases in their tools. The users, who need to make decisions quickly, will often look for a single score and then move on to the next case.
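As a toy sketch of why a single score explains so little: the same opaque formula, with weights nobody reviewing the output ever sees, can flip a decision on one stale record. Every number below is invented.

```python
# Toy illustration: one opaque number hides which factor drove the decision.
applicants = [
    {"name": "A", "income_ratio": 3.0, "years_employed": 4, "eviction_records": 0},
    {"name": "B", "income_ratio": 3.0, "years_employed": 4, "eviction_records": 1},
]

def opaque_score(a, weights):
    return (weights["income"] * a["income_ratio"]
            + weights["tenure"] * a["years_employed"]
            - weights["evictions"] * a["eviction_records"])

weights = {"income": 10, "tenure": 2, "evictions": 25}   # invented weights
THRESHOLD = 30

for a in applicants:
    s = opaque_score(a, weights)
    print(a["name"], round(s, 1), "accepted" if s >= THRESHOLD else "rejected")
# One old eviction record swings the outcome, yet the landlord only sees a number.
```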
