Future of tech, workplace and us in news – May 22

CB Insights has released its Q1 ’23 AI funding report. There’s a notable drop compared to the previous quarter, but that’s expected considering the overall belt-tightening in the tech sector. At the same time, three generative AI companies raised enough dough to gain unicorn status, and only one of them is from the US! Overall, M&A deals are up, and funding is sure to return to the 2022 level or surpass it by the end of the year. Money doesn’t like standing still…

“Money likes speed” painting in the Viirelaid Embassy in Tallinn.

Heard of Steven Levy’s Plaintext newsletter? If not, sign up for it, if only for his latest conversation with Gary Marcus, the AI critic who has become even more of one lately. Marcus has an interesting idea of forming an International Agency for AI, a non-profit to guide and guard the industry and nation states alike.

Caryn AI is a girlfriend-for-hire service, I mean, a digital twin of a Snapchat influencer designed to reduce loneliness. Or that’s what its creator states while hoping to pull in $5m a month at a rate of $1 per minute. “CarynAI is a step in the right direction to allow my fans and supporters to get to know a version of me that will be their closest friend in a safe and encrypted environment,” Caryn Marjorie added. No, there’s no altruism in play here, just pure capitalism. Sex sells.
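A quick back-of-envelope sketch, using only the two figures quoted above ($5m a month, $1 a minute), shows what that revenue target would actually imply in conversation volume:

\[
\frac{\$5{,}000{,}000 \text{ per month}}{\$1 \text{ per minute}} = 5{,}000{,}000 \text{ minutes per month} \approx \frac{5{,}000{,}000}{30 \times 60} \approx 2{,}800 \text{ hours of chat per day.}
\]

That is roughly 116 simultaneous round-the-clock conversations, a scale only the AI twin, never the influencer herself, could serve.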

Responsible AI is a theme all major developers aim to invest in. After all, trust, or the lack thereof, can change users’ perception of a company and encourage them to look for alternatives. When an AI system nudges us toward a more positive tone in our messages, we are likely to receive more positive responses. The technique is called latent persuasion. The same applies when the tone and messages of the chatbot are negative or biased (and again, the bias may be by design). And biased they are, reflecting the values of their creators and validators. A study called Whose opinions do LLMs really reflect? covers how we, the users of these systems, behave based on the tools we use. So our choice of tools will impact how we are perceived by others.

Who’s on the bus and who’s still trying to catch it? Ben Thompson covers Google I/O and the related regulatory topics in his excellent Stratechery post.

Google has been in the news with its Bard AI chatbot, but not so much with the work it’s been doing with the pharmaceutical industry, attempting to cut the lengthy discovery/trial process and time to market.

A subset of US voters is scared of the AI race. However, I have to agree with the words of Anyscale founder and UC Berkeley professor Ion Stoica: “Americans may not realize how pervasive AI already is in their daily lives, both at home and at work”. The unknown raises fears, but are your congressmen any wiser than the average Joe about the potential benefits and threats the AI race can pose to your future? Ask them.

How very true! Corporate L&D often focuses on the outcomes management desires, not on what the people to be trained actually need. Are we providing the right skills training to the people who need it most, when they need it? Often we don’t. How do we improve that?

The New York City Public Schools Chancellor has decided to lift the ban on using ChatGPT in the city’s public schools and instead start teaching kids the ethics of AI and the opportunities it brings. MIT has celebrated its Day of AI and created a starter curriculum for kids up to the age of 18 to get going with the topic. MIT’s RAISE programme looks good as a starting point. Has your kids’ school tried it yet?

Grammarly has been chosen by many as their go-to tool for churning out readable, coherent content. As the tech giants are eating its lunch, Grammarly is desperate not to lose (paying) customers and claims it’s here for good. It feels like its deep integration with Microsoft’s Azure infrastructure is a step towards showing off its product capabilities and eventually being acquired by MSFT. Agree with me?
