The future of workplace, life and business, June 30

A week of travels and many long-form podcasts later, I return here to reflect on the world through a technology lens. This time the focus is on regulation, future computing and models.

Midjourney prompt: /imagine prompt: AI foundation models providers increasing transparency in fear of EU regulation:: –v 4

To increase transparency, foundation models providers should clearly communicate the limitations of their models and disclose the sources of training data. They should document the training processes and enable external audits to ensure fairness and accountability. Additionally, engaging in open research, addressing user concerns, and providing interpretability tools can further enhance transparency and foster responsible use of foundation models.

OpenAI prompt: what should foundation models providers do to increase transparency?

The current US administration is grappling with crafting a policy that suitably distinguishes AI tech designed for consumer use from that intended for reconnaissance and offensive purposes by Chinese companies. Considering how modular and API-driven such solutions' functionality can be, it's not an easy task. Also bear in mind that some 30 years ago new tech was first tested and applied by the military, then the government sector, then large enterprises, followed by SMEs and eventually consumers. Today, consumers jump in first, followed by the SME sector and then the rest. The speed and cadence of innovation have become such that large organisations find it increasingly hard to embed new tech into their processes and working practices.

Stanford University's CRFM, which assesses AI foundation models, has taken a look at the major market players in light of the proposed EU AI Act. They find that incumbents score poorly on the current areas of concern and stress the need to enact the Act. The findings show a sharp divide between the expected and actual behaviour of foundation model providers when it comes to closed models. The research recommends the establishment of common standards. On disclosures, I especially like the following statement: “While progress in each of these areas requires some work, in many cases we believe this work is minimal relative to building and providing the foundation model and should be seen as a prerequisite for being a responsible and reputable model provider.” In short: document as you build your models and have the courage to disclose what the black box does. Read the full report here – it’s an enlightening 10 minutes.
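In the spirit of that recommendation, here is a minimal sketch of what “documenting as you build” could look like in code. The structure, field names and example values are my own assumptions for illustration, not drawn from the report or from any provider.

from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    # A hypothetical, minimal record a provider could publish alongside a model.
    model_name: str
    training_data_sources: list[str]     # where the training data came from
    data_cutoff: str                     # how recent the data is
    compute_used_gpu_hours: float        # rough training footprint
    known_limitations: list[str]         # what the model is known to be bad at
    evaluation_results: dict[str, float] = field(default_factory=dict)

disclosure = ModelDisclosure(
    model_name="example-llm-7b",
    training_data_sources=["public web crawl", "licensed book corpus"],
    data_cutoff="2023-01",
    compute_used_gpu_hours=250_000,
    known_limitations=["hallucinates citations", "English-centric"],
    evaluation_results={"mmlu": 0.52},
)

Keeping a record like this up to date during training costs little compared to building the model itself, which is exactly the report's point.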

The EU has launched a project to build four AI test centres which, according to Bloomberg, “are virtual and physical, will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities.” These should go hand in hand with and support the recent AI Act. Let’s hope developers will eagerly use these facilities.

Google is desperate not to lose its users and to gain more of a foothold in the AI race. They recently released a new secure AI framework aimed at business customers. The principles listed are nothing new and many organisations already apply them; the AI space is no different, says Google. Either way, it’s a good prompt to reflect on whether what we do is helping us towards a safer future. Elsewhere, Google has developed third-party integrations within its Docs suite (part of the “smart canvas” work) for both business and private users, and The Verge has taken a peek at the UX. The search giant also claims its AI chatbot, Bard, has improved its coding and maths capabilities. Some of the staff have labelled the hallucinating algorithm useless, but that won’t affect its march forward. This just makes me wonder when we will get to the point where we trust our AI companions to write new functionality for themselves and validate it. A fun experiment when conducted in a lab environment, but a tad scary if done live with access to the code repos and the ability to commit.

With so much anticipation around ESG regulation and the need to increase transparency on associated topics, I wonder how the foundation model providers are doing. You can measure when you know what matters and when you collect relevant data. Google, Microsoft, Amazon, Oracle, IBM and others are actively putting effort into measuring their impact and taking action on it. I like Microsoft’s report, as it’s built on its Sustainability Manager. What bothers me a little is that smaller foundation model providers do not focus on such resource drags as measuring and reporting on their impact. Or if they do, the data is well hidden on their websites. From speaking to customers and tech providers, I get a sense of urgency from larger organisations to start collecting and reporting on their performance and environmental impact. They are starting to understand its value, which lies not just in compliance but also in improved trust and business performance.

Intel has entered the quantum computing race, focusing on its current manufacturing capabilities and aiming to replicate its past success with silicon chips. Their competitors, however, are doubtful of the chosen path and are sticking to theirs. The more approaches, the merrier – a single solution is never the best choice at the beginning, and it doesn’t support innovation. Commercial mass availability is 5+ years away, but it will take us a huge step closer to AGI. CNET covers the story here.

If you believe that Meta has been lagging behind in the AI race, think again. Even better, listen to Lex Fridman’s interview with Mark Zuckerberg and then look at the recent announcement of its text-to-voice conversion technology – Voicebox AI. They are moving extremely fast, so much so that the model won’t be released to the public for fear of misuse. At the same time, Meta has published an overview in the form of 22 system cards detailing how Facebook and Instagram serve content to the user. The Verge provides an overview.

Patterns are everywhere, and so are the frameworks that help make our professional lives easier. Christopher Alexander first described them in the 1970s for architecture, and software developers quickly saw the value in the idea. Here’s a brief recap of the design patterns.
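As a quick illustration of the idea, here is a minimal sketch of one classic pattern from the software catalogue, the Strategy pattern, in Python. The class and function names are mine and purely illustrative.

from abc import ABC, abstractmethod
import zlib

class CompressionStrategy(ABC):
    # Interface every interchangeable behaviour must implement.
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NoCompression(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data

class FileArchiver:
    # The context delegates the variable part of its work to a strategy object.
    def __init__(self, strategy: CompressionStrategy):
        self.strategy = strategy

    def archive(self, data: bytes) -> bytes:
        return self.strategy.compress(data)

# The behaviour can be swapped without touching FileArchiver:
packed = FileArchiver(ZlibCompression()).archive(b"hello world" * 100)
raw = FileArchiver(NoCompression()).archive(b"hello world" * 100)

The point of a pattern, as with Alexander’s architectural ones, is a named, reusable solution to a recurring design problem rather than any particular piece of code.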

When investors were piling funding into crypto, the cautious voices asked about the value proposition (and so did I). Looking at the money now being funnelled into the AI and LLM race, everyone again wants to fit on the boat and be the first to invest. Inflection AI is the next darling, with no problem raising as much as they deem necessary to develop the personal AI – Pi. In the words of Mustafa Suleyman (who used to run DeepMind): “… it’s a historic growth opportunity. Inflection’s best bet is to “blitz-scale” and raise funding voraciously to grow as fast as possible, risks be damned.”

With English being deemed the most popular programming language, I wonder when support will be extended to other major languages. It’s not easy, but being able to interact with and direct AI in my native language would open up immense growth opportunities for many. I recognise it could also leave more people behind, and progress would probably happen in waves: tinkerers and small entrepreneurs discover something new and start using it, and then it gets turned into a platform service, available to all at a fixed cost.

Where big tech is more careful and places guardrails around its AI-powered tools and platforms, small developers may choose not to. Lack of resources and the pressure to avoid reputation-damaging hiccups often drive their decisions on how the tools behave; a NY Times article covers this topic. Some creators also cite personal responsibility, a preference for getting a straight answer, and an unwillingness to argue with the AI tool. If we accept the view of Eric Hartford, a creator of WizardLM-Uncensored, we’ll just build more echo chambers and division between different groups. “Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves their model. Open source is about letting people choose.” I can’t agree with the suggestion that generating fake news is OK but distributing it isn’t. If we chose not to distribute the content, it would not be generated; there would simply be no commercial value in it. Using such tools for education would be fine, but that’s not the goal of their creators.

I touched on the Bing AI integration a few weeks ago. Now Microsoft Bing will build you a shopping guide when you ask it! That’s pretty amazing, and I’d be very interested in its impact on the review-and-guide cottage industry. Whilst many such guides are already auto-generated, low-quality gibberish, others are very good, notably the ones behind a paywall. Now, how will sites like Rtings and others that review gear use chatbot capabilities to make their reviews so good that people are willing to pay for them?

And the last item covers rental and tenant assessment software, which is widely used in the US. According to The Lever’s article, it’s as biased a black box as many other AI-powered solutions. With more reliance on data and trust in its quality, more groups are likely to be treated unfairly. Yet I don’t think the progress can or should be reversed; software developers need to reduce the biases in their tools. The users, who need to make decisions quickly, will often look only at a single score and then move on to the next case.

#TED ten commandments for creative people and others

Reading through some older unread posts (this time from swiss-miss), I found this rather joyful collection of wisdom dating back to 2006 by Tim Longhurst. I hope I did not leave the author out; this is as far back as I managed to trace the origins of this message.

Anyway, reading through these words of wisdom / advice / commandments, I realised this should be part of every higher-education curriculum. Young people who are going to lead their lives need to know how to behave in public. Well, most don’t.

And if you do not know what TED is, go to their website, look at the events and then start contributing.

Here we go…

The TED Commandments

Thou Shalt Not Simply Trot Out thy Usual Shtick
Pressure yourself to keep learning about a topic. I have an Information Overload talk I gave a few years ago, and wouldn’t dream of presenting it again without catching up on the latest theories and contributions to the debate.

Thou Shalt Dream a Great Dream, or Show Forth a Wondrous New Thing, Or Share Something Thou Hast Never Shared Before
Don’t be afraid to experiment with what you’ve already learned. Share not only what you know, but what you’d like it to be. Look at your processes, at what you do every day. If it works for you, it’s quite possible the process is a good one and could be shared, inviting discussion to make it even better.

Thou Shalt Reveal thy Curiosity and Thy Passion
It’s your topic, your audience. Own them. Your talk may be at a monthly department meeting or national conference, but most likely you’ve got a keen interest in the subject. Show it!

Perhaps you actually are passionate about the topic, even better. Share your excitement as well as your progress.

Thou Shalt Tell a Story
Involve your audience by giving them someone to empathize with and a reason to care. The story might be about yourself or someone else; it doesn’t matter, as long as it’s well told.

Thou Shalt Freely Comment on the Utterances of Other Speakers for the Sake of Blessed Connection and Exquisite Controversy
As you catch up, read and get involved on blogs by those you admire within the topic. Commenting on posts is a great way to become engaged with those who care about the same things you do. Also explore dissenting opinions, adding your own if you have them.

Thou Shalt Not Flaunt thine Ego. Be Thou Vulnerable. Speak of thy Failure as well as thy Success.
No one wants to hear about how wonderful you are because you figured this out; they want to hear about the different methods you used to reach the conclusion. If you’ve learned from your mistakes, someone else will, too.

Thou Shalt Not Sell from the Stage: Neither thy Company, thy Goods, thy Writings, nor thy Desperate need for Funding; Lest Thou be Cast Aside into Outer Darkness.
While I agree that I don’t usually want to hear a sales pitch, I’d take exception to this when appealing for library funding or for my job.

Thou Shalt Remember all the while: Laughter is Good.
I’ve heard different opinions on humor during talks, but I gave this advice to a staff member just the other day: if you’re going to use humor, point it towards yourself. I use self-deprecating humor quite a bit; it seems to relax the audience, especially when teaching technology. Also be careful of humor that may offend someone: I thought about writing this post as if I were Moses and God Himself had delivered the tablets and burning bushes were involved. I reconsidered… probably very wisely.

Thou Shalt Not Read thy Speech.
Worse yet, never turn your back to your audience to read slides. Then again, don’t put so much text on a slide that you’d have to read it at all! Text is for handouts.

Thou Shalt Not Steal the Time of Them that Follow Thee
Make your talk worthwhile with your passion and your knowledge. Give them one big thing to remember a week later; the chance of retention is better the less you try to cram into their heads. Even though some training is repetitive in nature, get them excited, fired up and ready to go use what you’ve taught them!
