Future of tech, workplace and us in news – May 28

Below is a collection of stories, articles and views from the last week that have shifted and nudged my thinking on AI, the future of work and tech.

“We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin,” European Commission President Ursula von der Leyen said on Friday. The G7 leaders mentioned generative AI, the subset popularised by the ChatGPT app, saying they “need to immediately take stock of the opportunities and challenges of generative AI.” All this reflects a sense of urgency to balance the emerging tech with societal safety.

AI is gaining the ability to communicate with humans through language at a pace not foreseen even by its creators. Storytelling is key to why we humans behave the way we do. Gods, money and many other constructs are not biological; these phenomena are created by us and hold their value only through our belief in them. The AI tools will fight for our intimacy… To manipulate people’s behaviour, there is no need to modify them physically (e.g. insert a chip into their brains) – people’s perception and subsequent actions have been altered by language for thousands of years.

I’m paraphrasing Yuval Noah Harari’s thoughts here. I highly recommend watching his recent lecture at the Frontiers Forum below.

When I spoke to my 12-year-old son about AI and related themes, I noticed he used the term ‘they’. When I asked, he corrected it to ‘it’, stating, “it clearly isn’t he or she”. How do you think of it?

Trust but verify? Do I still have to verify if the platform provider has done it? Yes, because they verify the tweeter, not their content. It’s still your job to think critically before shouting to everyone “OMG, look what’s happening!!!”. What’s happening is that you were tricked into believing something that didn’t happen. ‘Fake it till you make it’ could be the new slogan of misinformation campaigners.

Meta’s researchers used the Bible in spoken and written form to teach their open-source AI model to recognise a ton of languages. But, lo and behold, the source is stuffed with ancient bias and may produce all sorts of output. So, a move in the right direction for preserving small languages, but it needs more work.
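For a feel of how such a model is consumed in practice, here’s a minimal sketch assuming the Hugging Face transformers library and Meta’s publicly released facebook/mms-1b-all checkpoint; the file name is a placeholder and the pipeline defaults are my assumption, not something from the article:

```python
# A minimal sketch: transcribing an audio clip with Meta's MMS model.
# Assumes the Hugging Face `transformers` library and the public
# "facebook/mms-1b-all" checkpoint; "clip.wav" is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/mms-1b-all")

# The pipeline accepts a path to an audio file and returns a dict
# containing the recognised text.
result = asr("clip.wav")
print(result["text"])
```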

OpenAI leadership is calling for an international oversight body to steer the sector. However, as I’ve written before, the race to the bottom hasn’t slowed down. From the tone, it feels like “we need to slow down, but before we do, we need to win!”. Yuval Harari notes in his speech that collectively putting the foot on the brakes in the western world would not result in China or any other counterforce suddenly gaining the upper hand. If they had the capabilities required to succeed, it would have already happened. Pausing to design and install an oversight body now would not risk anything for the wider society. It’s a bit like the post-Cold War period, when the US suddenly lost its counterbalance (aka the enemy) and its politicians were desperate to find a new one. I believe the race isn’t between players from the east and west. It’s a domestic US conflict in which AI leaders all want to win the race. But what waits at the other end? So perhaps the political elite should look at their donors and decide what is important in the long term – the stability of the nation or their own position.

A recent study showed that artificial neural networks (ANNs) learn in a way similar to the human brain. If we could only make these models less power hungry (i.e. raise efficiency) and push the computing to the edge, reducing reliance on central components, we’d be over the hump with this one. Such a step would enable applying machine learning wherever it is needed, and contextual awareness would raise its ability to respond fast without prior knowledge of the environment. Oh wait, is that a good thing? Or is it a bit like Terminator?
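On the efficiency point, one common technique today (my illustration, not something from the study) is post-training quantization, which converts a model’s weights to 8-bit integers so it runs with less memory and power on edge hardware. A minimal PyTorch sketch:

```python
# A minimal sketch of dynamic post-training quantization in PyTorch:
# linear layers are converted from 32-bit floats to 8-bit integers,
# cutting the memory and compute cost for edge deployment.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # same interface, smaller footprint
```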

Now, how do you counter a machine that knows everything and can memorise more than you ever could? You could train your memory to hold everything you need to know, or you could simply train it to know where the tools and resources are. Either way, memory training is good.

Here’s an interesting and promising development for anyone left paralysed and unable to walk – a brain-spine interface that translates intentions into electrical signals, bypassing the damaged areas. Reuters, meanwhile, covered the Neuralink FDA approval story – not too dissimilar, but with wider long-term implications. Positive or negative – we’ll see.

Have you heard of functional music? You know, the playlists that help you with a specific activity, like staying sharp or winding down. Endel is a startup that has managed to schmooze Universal Music into partnering with it to capture the booming market. A win for listeners and Endel, as auto-generated music will be streamed on known platforms. I’d like to know how they will deal with the plagiarism question – was it an inspiration or a copy?

Meanwhile, Spotify has been working on simplifying its advertising business – the next time you hear your favourite podcast host reading an ad, it may not be them any more. Give it a text, a voice model and a sentiment, and voila!
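To make the text-in, voice-out step concrete, here’s a toy sketch using the offline pyttsx3 library; the real ad systems layer a cloned host voice and sentiment control on top, which this deliberately doesn’t attempt:

```python
# A toy text-to-speech sketch with the offline pyttsx3 library.
# Real ad-insertion systems add a cloned voice and sentiment control;
# this only shows the basic text-in, audio-out step.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)  # speaking speed in words per minute
engine.say("This episode is brought to you by our sponsor.")
engine.runAndWait()
```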

How would you feel about World ID, a concept of an identity based on a scan of your iris? It would be your way to maintain personal privacy while proving your humanness in an economy disrupted by AI and automation, as stated by Alex Blania, the cofounder and CEO of Tools for Humanity. TFH is a spinout of Worldcoin, which Sam Altman started a few years ago. Investors are feeling bullish and pouring $115m into the project. I’d suggest reading the privacy notice, and I would like to see independent third-party validation of those claims.

Multilingual LLMs are seen as giving social media platform owners a leg up, requiring fewer humans to moderate content created in multiple languages. However good these LLMs are, context awareness (or knowing your territory and where you stand) still matters. As covered in this Wired article, there are a few issues we are going to face for the foreseeable future. These are:

  • a focus on large languages
  • limited availability of training material for smaller languages and dialects
  • the definition of what is harmful
  • platform owners’ unwillingness to share how their models work

The companies should ditch the ‘rest of the world problem’ approach and shift their products towards being used more for good than ill.

This is a really positive development in identifying and taking predefined action on hateful content, both in images and text. Kudos to Microsoft for developing such a toolset. Yet, time will tell how effective it is. Hope for the best!
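To give a sense of how such a toolset is typically consumed, here’s a hypothetical sketch of a REST call to a hosted text-moderation endpoint; the URL path, api-version and response fields are my assumptions for illustration, not the documented contract of Microsoft’s service:

```python
# A hypothetical sketch of calling a hosted text-moderation endpoint.
# The URL path, api-version and response fields are illustrative
# assumptions, not the documented contract of Microsoft's service.
import requests

ENDPOINT = "https://example.cognitiveservices.azure.com"  # placeholder
API_KEY = "your-key-here"                                 # placeholder

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:analyze",      # assumed path
    params={"api-version": "2023-04-30-preview"},  # assumed version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={"text": "Some user-generated text to screen."},
)
response.raise_for_status()

# Assumed response shape: one severity score per harm category.
for category in response.json().get("categoriesAnalysis", []):
    print(category)
```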

Listen to Nilay Patel talking to Kevin Scott, Microsoft’s CTO. Some takeaways below, but not all – spend that hour, it’s worth it.

  • Co-pilot creation – Microsoft doesn’t have the knowledge of business users to build tools that help *this role in that sector*, but those people do. Giving them the ability to compose their own co-pilot is an interesting development. As Microsoft owns the ecosystem, how will they share the additional revenue gained from AI co-pilot developments?
  • I really like the idea of a media provenance system – put an invisible cryptographic watermark and manifest into files, showing the receiver where they originated from (a toy sketch of the idea follows this list). This could be a boost to digital art and another hit at pirated content.
  • Microsoft’s position on compensating the creative industry whose output is used to train AI engines is not entirely clear.
  • What is the definition of a good platform? Microsoft wants to encourage people to build assistive tools. An open platform doesn’t mean full access to the underlying tech, but the ability to build your own stuff via APIs. What would you build when the unit economics enable you to start as the price and quality leader and then develop your revenue stream – without burning some state’s pension fund? Would you focus on the tech or on using the platform?
  • Common and separate objectives of Microsoft and OpenAI, oversight boards and partnerships, and much more.
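On the provenance point above, here’s a toy sketch of the manifest idea using only Python’s standard library: hash the file, wrap the hash and origin in a manifest, and sign it so a receiver can verify the source. A real system would use proper certificates and embed the manifest invisibly in the media file itself; this only shows the hash-and-sign core:

```python
# A toy sketch of the media-provenance idea: hash the file, wrap the
# hash and origin in a manifest, and sign it so a receiver can verify
# the source. Real systems use certificate chains and embed the
# manifest in the media file itself; this shows only the core step.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # placeholder for a real private key

def make_manifest(path: str, origin: str) -> dict:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    manifest = {"origin": origin, "sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_manifest(path: str, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    signature_ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    )
    file_ok = hashlib.sha256(open(path, "rb").read()).hexdigest() == manifest["sha256"]
    return signature_ok and file_ok

if __name__ == "__main__":
    with open("image.bin", "wb") as f:
        f.write(b"pretend these are image bytes")
    m = make_manifest("image.bin", "news-desk.example.com")
    print(verify_manifest("image.bin", m))  # True
```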