Worldcoin has been in the tech news recently. Launched nearly a year ago, Worldcoin is one of many providers of digital identity in the age of AI, offering tokens to new sign-ups in exchange for, well, putting one’s faith in its identity-proof technology and trusting that their personal data is well guarded. The shiny orb scans the user’s iris, and in addition to a verified identity the user gets a gift. What’s wrong with that?
Quartz is reporting on criminals exploiting vulnerable people who hope for an easy payout and get only a fraction of it. In the process they also give up their identity, opening the door to second-tier fraud. Not a good deal, but entirely predictable. Is it also avoidable? Vitalik offers some thoughts on how it could be done in his post above.
Whilst I welcome initiatives to get more people participating safely in a Web3 digital economy (which is still evolving and figuring out its business models), handing out tokens (that can be traded for money) to grow the user base may not be the best strategy. The provider will soon discover that new users had motives different from theirs, and the promise of verified personhood has little to do with those. Tying identity to UBI will definitely be a boon to tech providers and their supply chains, as countries need many contact points to roll out identity validation systems.
Regulators must not focus on just one company – Worldcoin – for extensive scrutiny while others fall through the net. I would hope that regulators hire experts who understand both the legal and technological aspects and provide appropriate guidance, which in turn will feed into GDPR and other similar legislation. Equally, I hope that in this case the decision makers will not give in to the providers’ pressure to do something that achieves little beyond making everyone feel somewhat content. Our identity must remain unique and secure, and current systems need a massive overhaul before we can even start thinking of a unified, globally trusted identity. And it shouldn’t be the corporates that run it, but rather a decentralised network validated by a UN-type body.
A week of travels and many long-form podcasts later, I return here to reflect on the world through a technology lens. This time the focus is on regulation, future computing and models.
The current US administration is grappling with crafting a policy that suitably distinguishes AI tech designed for consumer use from that built for reconnaissance and offensive purposes by Chinese companies. Considering how modular and API-driven a solution’s functionality can be, it’s not an easy task. Also bear in mind that some 30 years ago new tech was first tested and applied by the military, then the government sector, then large enterprises, followed by SMEs and eventually consumers. Today, consumers jump in first, followed by the SME sector and then the rest. Innovation speed and cadence have become such that large organisations find it increasingly hard to embed new tech into their processes and working practices.
Stanford University’s CRFM, which assesses AI foundation models, has taken a look at the major market players in light of the proposed EU AI Act. They find that incumbents score poorly on the current areas of concern and stress the need to enact the Act. The findings show a sharp divide between the expected and actual behaviour of foundation model providers when it comes to closed models. The research recommends the establishment of common standards. On disclosures, I especially like the following statement: “While progress in each of these areas requires some work, in many cases we believe this work is minimal relative to building and providing the foundation model and should be seen as a prerequisite for being a responsible and reputable model provider.” In short – document as you build your models and have the courage to disclose what the black box does. Read the full report here – it’s an enlightening 10 minutes.
The EU has launched a project to build four AI test centres “which are virtual and physical, [and] will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities,” according to Bloomberg. These should go hand in hand with, and support, the recent AI Act. Let’s hope developers make eager use of these facilities.
Google is desperate not to lose its users and to gain more of a foothold in the AI race. It recently released a new secure AI framework aimed at business customers. The principles listed are nothing new, and many organisations already apply them; the AI space is no different, says Google. Either way, it’s good to be reminded and to reflect on whether what we do is helping us towards a safer future. Elsewhere, Google has developed third-party integrations within its Docs suite (part of the “smart canvas” work) for both business and private users; The Verge has taken a peek at the UX. The search giant also claims its AI chatbot, Bard, has improved its coding and maths capabilities. Some staff have labelled the hallucinating algorithm useless, but that won’t affect its march forward. This just makes me wonder when we’ll reach the point where we trust our AI companions to write themselves new functionality and validate it. A fun experiment when conducted in a lab environment, but a tad scary if done live with access to the code repos and the ability to commit.
With so much anticipation around ESG regulation and the need for greater transparency on associated topics, I wonder how the foundation model providers are doing. You can measure when you know what matters and when you collect the relevant data. Google, Microsoft, Amazon, Oracle, IBM and others are actively putting effort into measuring their impact and taking action on it. I like Microsoft’s report, as it’s built on the Sustainability Manager. What bothers me a little is that smaller foundation model providers do not focus on such resource drags as measuring and reporting their impact. Or if they do, the data is well hidden on their websites. From speaking to customers and tech providers, I get a sense of urgency from larger organisations to start collecting and reporting on their performance and environmental impact. They are starting to understand its value, which lies not just in compliance but also in improved trust and business performance.
Intel has entered the quantum computing race, focussing on its current manufacturing capabilities and aiming to replicate its past success with silicon chips. Its competitors, however, are doubtful of the chosen path and are sticking to theirs. The more approaches, the merrier – a single solution is never the best choice at the beginning, and it doesn’t support innovation. Commercial mass availability is 5+ years away, but it’ll take us a huge step closer to AGI. Cnet covers the story here.
Patterns are everywhere, and so are the frameworks that make our professional lives easier. Christopher Alexander first described them in the 1970s for architecture, and software developers quickly saw the value in the idea. Here’s a brief recap of the design patterns.
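As a tiny taster (my own sketch, not from the linked recap), here’s the classic Observer pattern in Python – subscribers register with a subject and get notified on changes:

```python
# Minimal Observer pattern: a subject keeps a list of callbacks and
# notifies each of them whenever an event occurs.

class Subject:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)


received = []
feed = Subject()
feed.subscribe(received.append)                  # observer 1: store as-is
feed.subscribe(lambda e: received.append(e.upper()))  # observer 2: shout it
feed.notify("new post")
# received is now ["new post", "NEW POST"]
```

The value, as with all patterns, is the shared vocabulary: “use an observer here” conveys the whole structure in three words.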
When investors were piling funding into crypto, the cautious voices asked about the value proposition (and so did I). Looking at the money now funnelled into the AI and LLM race, everyone again wants a seat on the boat, to be the first to invest. Inflection AI is the next darling, with no problem raising as much as they deem necessary to develop their personal AI – Pi. In the words of Mustafa Suleyman (who used to run DeepMind): “… it’s a historic growth opportunity. Inflection’s best bet is to “blitz-scale” and raise funding voraciously to grow as fast as possible, risks be damned.”
As English is deemed the most popular programming language, I wonder when support will be extended to other major languages. It’s not easy, but the ability to interact with and direct AI in one’s native language would unlock immense growth opportunities for many. I recognise it could leave more people behind, and progress would probably happen in waves – tinkerers and small entrepreneurs discover something new and start using it, and then it gets turned into a platform service, available to all at a fixed cost.
Where big tech is more careful and places guardrails around its AI-powered tools and platforms, small developers may choose not to. Lack of resources and the pressure to avoid reputation-affecting hiccups often drive their decisions on how the tools behave; a NY Times article covers the topic. Some creators also cite personal responsibility, a preference for a straight answer and an unwillingness to argue with the AI tool. If we accept the view of Eric Hartford, a creator of WizardLM-Uncensored, we’ll just build more echo chambers and division between groups. “Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves their model. Open source is about letting people choose.” I can’t agree with the suggestion that generating fake news is OK but distributing it isn’t. If we chose not to distribute the content, it would not be generated – there would simply be no commercial value in it. Using such tools for education would be fine, but that’s not the goal of their creators.
I touched on Bing AI integration a few weeks ago. Now Microsoft Bing will build you a shopping guide when you ask it! That’s pretty amazing, and I’d be very interested in its impact on the review-and-guide cottage industries. Whilst many are already auto-generated low-quality gibberish, others are very good – notably the ones behind a paywall. Now, how will sites like Rtings and others reviewing gear use chatbot capabilities to make their reviews so good that people are willing to pay for them?
And the last item covers rental and tenant assessment software, widely used in the US. According to the Lever article, it’s as biased a black box as many other AI-powered solutions. With more reliance on data and trust in its quality, more groups are likely to be treated unfairly. Yet I don’t think the progress can or should be reversed – the software developers need to reduce the biases in their tools. The users, who need to make decisions quickly, will often look for a single score and then move on to the next case.
When consulting, I am often asked about IT demand: how to deal with it and how to decide what to take forward. I’ve covered the topic briefly in the strategy section before. Let’s unpick it a little.
The way IT demand is managed will depend on the established IT or delivery structure and relationship with the rest of the organisation and maturity of the processes. In short, what we do and how we do it.
It is useful to bear in mind what the business relationship management does in ITSM and value management context – identify, stimulate and shape strategic IT demand. It sets the boundaries of how the IT solutions are requested and delivered. The people in this role work across the organisation bridging the business demand with IT delivery capabilities and tracking the value delivery in the process.
I’ve worked in and with organisations with clear processes for demand intake, triage, prioritisation, assessment, handover to delivery and later analysis of the value produced. I’ve also worked for and with organisations that have no formal arrangements in place, where demand is dealt with on an ad-hoc basis. The latter approach may work for small organisations with low overall IT maturity. When the organisation and its teams grow, however, there’s a need to put some framework in place. The easiest route is to follow the ITIL/ITSM guidance and establish the necessary roles and procedures.
I like to use HS2 as an example where we had no formal way of managing demand for IT. I’m generalising on purpose – there should be a single channel for the (internal) customers to interact with the provider (your IT unit). It doesn’t matter what type of IT product or service is requested, the service management tool or an IT process should make it easy for the customer to do so. We were tasked with establishing a process and sub-processes, necessary gates and such to ensure that:
internal customers can and will go through a single route for requesting new IT
there’s a framework for classifying demand along with investment requirements, reviews and steering groups
requests for new IT are tied to budgetary process
technology debt is effectively managed
value is measured
My team’s focus was on strategic demand. By interacting with the customers, however, it became clear that they may not know what they need and may be unable to understand the implications of new IT solutions. We also noted that our colleagues may describe a desired solution rather than the problem. To counter that, we borrowed a note from the UK Department for Transport IT:
“Describe a problem, not a solution.”
After some deliberation and consulting the service management team, we decided to take the next steps:
define the process, roles and routines.
The process was initially very simple – capture the demand, assess and contact the requester to understand more.
Roles were defined as requester, assessor/analyst (IT Business Relationship Manager/Business Analyst), approver (the budget holder and service provider), delivery team (IT service management or project delivery)
establish a service request form in ServiceNow to capture all new demand. The form was dynamic, based on the customer’s choices, and sifted standard SRs to the service desk and the rest into the new IT demand list.
set up weekly demand review meetings for initial triage, demand correction and future customer education. Here we agreed who would be part of the initial assessment and what the expected outcomes were.
With this approach we were able to capture 95% of the new IT demand. By collating all the requests already shared with various parts of IT, we managed to skim a list of 400 down to circa 120 (a lot were repeat requests or substitutes for existing services). By using ServiceNow routing and custom dynamic forms, we were able to produce an intelligent ‘interaction’ with the customer. Customers felt they were not being asked to describe the same thing over and over again, just to clarify details.
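The intake logic could be sketched roughly like this (illustrative Python, not the actual ServiceNow configuration – the catalogue entries and field names are made up): standard service requests are routed straight to the service desk, duplicates are collapsed, and everything else lands on the new IT demand list.

```python
# Rough sketch of the intake routing. Duplicate detection is how a list of
# ~400 requests gets skimmed down once repeats are collapsed.

STANDARD_CATALOGUE = {"password reset", "new laptop", "software licence"}

def route(requests):
    service_desk, demand_list, seen = [], [], set()
    for req in requests:
        key = req["summary"].strip().lower()
        if key in seen:
            continue  # repeat request, already captured
        seen.add(key)
        if req["category"] in STANDARD_CATALOGUE:
            service_desk.append(req)   # standard SR -> service desk
        else:
            demand_list.append(req)    # everything else -> new IT demand
    return service_desk, demand_list

desk, demand = route([
    {"summary": "Reset my password", "category": "password reset"},
    {"summary": "CRM for stakeholder team", "category": "new solution"},
    {"summary": "reset my password", "category": "password reset"},  # duplicate
])
# desk has 1 item, demand has 1 item
```

In the real thing the routing sat inside the service management tool, of course; the point is that a single intake channel plus simple rules does most of the triage work automatically.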
Once the standard ITSM process was in place, we could focus on strategic demand. For that we used a set of questions that formed the basis for the conversation. Here’s an example:
What is needed?
Why is it needed?
Who needs it?
Assumed cost and who pays for it?
What gets better?
When is it needed by?
Risks / opportunities?
Link to corp programme?
This was the initial assessment form, and we deliberately chose not to add more information to it than needed for triage – will it go ahead or get rejected? That meant no more than 1-2 sentences per box.
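The triage questions above map naturally onto a simple record structure – here’s an illustrative sketch (the field names and sample values are mine, not the actual form):

```python
from dataclasses import dataclass

# Each field mirrors one triage question; 1-2 sentences per box.
@dataclass
class DemandRequest:
    what: str                 # What is needed?
    why: str                  # Why is it needed?
    who: str                  # Who needs it?
    cost_and_payer: str       # Assumed cost and who pays for it?
    benefit: str              # What gets better?
    needed_by: str            # When is it needed by?
    risks_opportunities: str  # Risks / opportunities?
    programme_link: str       # Link to corp programme?

req = DemandRequest(
    what="CRM replacement",
    why="Bespoke in-house system is unsupported",
    who="Stakeholder engagement team",
    cost_and_payer="~EUR 300k, engagement directorate",
    benefit="Single source of truth for contacts",
    needed_by="Q3 next year",
    risks_opportunities="Data migration risk",
    programme_link="Digital transformation programme",
)
```

Keeping the structure this small is the point – anything more belongs in the detailed assessment that follows a successful triage.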
As my team’s focus was on projects, we established a framework for assessing financial and work-time impact. For this we set out the following criteria. Please note that not all may apply.
Cost | Scope | Duration
Up to €50k | Single directorate / unit | Up to 1 month
€50k to €250k | Single directorate / unit | Up to 3 months
€250k to €500k | Multiple directorates / partners | Up to 6 months
€500k to €1m | Org wide / partners | Up to 1 year
€1m and above | Org wide / partners | > 1 year
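The assessment bands could drive a simple sizing rule – my own sketch of how such a triage score might work, not our actual implementation: each dimension maps to a band from 1 to 5, and the overall project size is the highest band hit.

```python
# Map cost, duration and organisational scope to bands 1-5 and take the
# maximum as the overall size. Thresholds mirror the assessment criteria.

COST_BANDS = [50_000, 250_000, 500_000, 1_000_000]  # EUR upper limits
TIME_BANDS = [1, 3, 6, 12]                          # months upper limits
SCOPE_BANDS = {"single unit": 1,
               "multiple directorates": 3,
               "org wide": 4}

def band(value, thresholds):
    for i, limit in enumerate(thresholds, start=1):
        if value <= limit:
            return i
    return len(thresholds) + 1  # beyond the last threshold -> top band

def project_size(cost_eur, months, scope):
    return max(band(cost_eur, COST_BANDS),
               band(months, TIME_BANDS),
               SCOPE_BANDS[scope])

# A EUR 300k, 5-month, org-wide project: cost band 3, time band 3,
# scope band 4 -> overall size 4.
size = project_size(300_000, 5, "org wide")
```

Taking the maximum rather than an average reflects how governance usually works: one org-wide dimension is enough to pull a project into a heavier review route.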
We anticipated demand to arrive later during the year and thus encouraged our colleagues to engage early and invite us to be part of their thought process. This approach served multiple purposes:
it enabled IT to have an early signal of business planning and to conduct internal assessments early, without commitment to delivery
it helped to build trust between people and to build up business expertise within the IT
it allowed IT to promote existing services and plan necessary changes to those.
The last element we were asked to deliver was the hardest – understand the anticipated value and be able to measure it. Working with our colleagues from the value management team, we included a set of goals for every project along with measures to track their impact. Each project had to deliver at least one of these:
improve customer experience
improve data quality
meet regulatory demand
reduce technology debt
For example, an initiative to introduce a CRM solution helped to improve customer experience, business processes and data quality, while reducing technology debt and meeting regulatory demand. In this project we transitioned from a bespoke in-house system to cloud-based Dynamics CRM, applied data protection policies, designed easy routes for customers to engage with the team, and trained people to use the system. We finally had a single source of truth and were able to respond effectively to FOIs and data requests. All of these were assumed benefits with values to track against once in operation.
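The goal-to-measure link might look like this in code (an illustrative sketch using the CRM example – the specific measures are invented for illustration):

```python
# Each project declares which value goals it serves and one measure per
# goal, giving the value management team something concrete to track
# once the project is in operation.

crm_project = {
    "name": "Dynamics CRM migration",
    "goals": {
        "improve customer experience": "avg. response time to enquiries",
        "improve data quality": "duplicate contact records count",
        "meet regulatory demand": "FOI responses within statutory deadline",
        "reduce technology debt": "bespoke systems decommissioned",
    },
}

def qualifies(project):
    # A project must serve at least one of the four value goals.
    return len(project["goals"]) >= 1

# crm_project qualifies - it happens to serve all four goals.
```

Forcing a measure per goal at intake, rather than after delivery, is what makes “assumed benefits” trackable instead of aspirational.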
I recognise this blog covered more than just demand management, but the function is quite broad, and for it to be valuable – not just a drag on people’s time – it needs to understand both business and IT, and to be engaged with both throughout the demand process.
Below is my reflection of the AI related news from the past week. These are likely to be changes that’ll impact us in the long run.
Does your favourite LLM/chatbot/machine know the topic to the extent it will not attempt to wing it in order to please you with its response? It’s not new for AI models to hallucinate, but with convincing enough content we wish that away. A case for future legal professionals to ponder over.
How would you feel about your favourite podcast being autogenerated? What content will it be based on? And what happens when the original content creators stop producing input? Is it just a fad, or something that can and will be monetised? Would you care that the script and delivery were produced by non-human actors, if all you are after is information + conversation? Or would you care about a connection to the presenters? Who says AI can’t actively engage with the audience, replicating the human host? I believe generative AI can’t yet be creative enough to shift the tone (pun intended!) and topic half-way. Would we even want to develop such capabilities? Wired covers the topic here.
Would the game creators determine the end result, or just set loose boundaries for how certain external factors might play out – and then leave it to the models to provide an ever-changing story that delivers a customised experience to every player? Am I hallucinating? See Nvidia’s recent announcement on autogenerated content.
3000 US workers were asked about their attitudes towards AI in the workplace. You might guess that older generations are more concerned than younger ones and less likely to adapt to AI-driven tools. As it turns out, the responses are very similar across all age groups and probably depend more on personal circumstances like job role, awareness and financial position. I’d like to see something similar done in other major economies, the EU included. The full report is available here.
Meanwhile, according to the Washington Post, many organisations are considering replacing their copywriters with ChatGPT. And those writing copy, according to the article, are trying their best to dissuade their employers by focusing on the poor output of generative AI tools. Would it not be more persuasive to give the employer a comparison of outputs and point out how they would use the new tools to enhance their work and reduce their customers’ service cost? Perhaps the people interviewed were unhappy with their current choice and already considering a shift to non-digital occupations. Either way, the AI-driven world is not going to slow down, and many white-collar jobs will be lost for good. Governments should consider mandating lower costs of goods and services for businesses that replace their workforce with AI tools and do not fill the positions. A bit of invasive capitalism, I know. But the machines are disrupting the world as we know it at a rate not seen before.
On with the journalism. If you’ve heard of the Artifact app, or are using it, you’ll appreciate its slick clean lines and … otherwise it’s like any other modern news aggregator. The team has now built the capacity to rewrite clickbaity headlines using GPT-4. The next step is to train the algorithm to recognise noise and alter the headline automagically. The problem? Well-funded apps can provide good, free aggregated content; everyone else wants to sell more ad space. I wonder how long it will take for others to follow suit. And how little for the content mills (already using ChatGPT and other generative AI tools) to tweak their output and stuff multiple clickbaits into one story? Seems like an opportunity for all involved to win more consumer screen time.
Security is necessary, but oftentimes an inconvenience. The response is… biometrics, isn’t it? This Babbage podcast episode from The Economist explores the opportunities and threats associated with generative AI and biometrics. A simple reminder – a secret (=password) is something you know; a feature (=biometrics) is something you are. The former can be changed with ease, and the impact of a lost password can in most cases be stopped or even reversed. Your biometric data is public – I can recognise your face and voice. When your biometric metadata becomes public though (knowledge of what it looks and sounds like, plus knowledge of how to easily replicate it), the sky becomes slightly cloudier – it’s very hard, if not impossible, to change your biometrics. Listen to the episode here; it’s rather thought-provoking.
I have to agree with Andrew Ng on this one – AI should be seen as a solution, not the problem. However, it’s worthwhile setting safe boundaries to avoid large-scale mishaps should it connect to critical infrastructure and start acting as a chaos monkey. Generating stories to match its suggestions is not too inhuman an activity, so probably its creators have tweaked ChatGPT to find evidence where there is none. You believe it? Well, we told you – they may be bonkers.
The Ukrainian Diia app is starting to make waves. It was inspired by the experience of Estonia’s governmental systems, and now our mRiik is taking inspiration from it. When state digital services are accessible when you need them, regardless of your location, they will be used. Axios covers the Diia story.
Microsoft is trying to get you to use Teams for business and personal purposes alike. Detractors’ noise aside, the ability to easily set up online communities is never a bad thing. Even better if it comes sans the near-compulsory advertising stream seen in every commercially minded ‘free’ platform. As Teams is baked into Windows 11 and many people actively use it, I’d like to see Microsoft’s product teams agree on a timeline for culling Skype, along with a tool to easily migrate content history. That has so far been seen as too laborious, hence the steam has gone into pushing Teams. Old habits die hard, and many will find no compelling reason to learn another messaging platform. I think Microsoft needs to rethink what the customers are getting. An integrated version of AI-supported Microsoft Designer may be it, but not necessarily. On another minor note, as Windows Copilot enters the user realm, Microsoft is quietly pushing Cortana out of it. Did you ever use it? And how did it compare to Siri and Alexa?
Technical debt and orphan code have long been challenges plaguing CIOs. As more and more code is built, confusion naturally grows around what certain code does and how it was created. As developers leave companies, that confusion intensifies, and as time goes on a growing pile of code becomes more and more difficult to keep up to date. I expect these issues to be aggravated by generative AI coding tools.
The thinking above is prompted by the WSJ piece “AI Is Writing Code Now. For Companies, That Is Good and Bad.”
More companies expect workers to return to the offices for at least three days a week. Meta is one of the tech giants to mandate that from September. One could wager that corporate real estate needs to be used, or that we are in general more effective and productive when we have the human connection, i.e. working together in a physical space. My take is that it really depends on the needs set by the role and individual circumstances. And preferences. What’s your take?
Amazon has started rolling out AI-based tools to enhance its logistics operations. You could see it in two ways – either to raise customer satisfaction or to reduce reliance on human workers. Or both. The statements are interesting to read. Jeremy Wyatt, director of applied science at Amazon Robotics, said of detecting a faulty item: “That’s cognitively demanding because obviously you’re looking for something that’s rare and it’s not your primary job”. It might read as an assumption that the warehouse worker isn’t the smartest, but their metrics are set not on quality but on throughput per hour. I don’t think we should design systems where humans do lower-cognition jobs than machines; IMHO the human should be collaborating with the machine at their optimum mental capacity. Yes, in the future that means fewer jobs in the logistics sector that feeds our collective desire for more stuff ASAP. This again raises a question for governments – what are the effective policies for shaping the future and preparing the workforce for it?
To wrap up, Telly is going to start shipping free 55″ 4K TVs to those who sign up for the service. Yes, service, not a freebie. I’d love to understand the business model and ROI calculations a bit more. Extending these into hotels is also a clever move, but then you’ll need to figure out how to partner with the hotels, who currently hold the monopoly on visitors’ free screen time. Who knew advertisers were willing to pay that much for attention!
Below is a collection of stories, articles and views from the last week that shift and nudge my thinking and views on AI, future of work and tech.
“We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin,” European Commission President Ursula von der Leyen said on Friday. The G7 leaders mentioned generative AI, the subset popularised by the ChatGPT app, saying they “need to immediately take stock of the opportunities and challenges of generative AI.” All this reflects a sense of urgency to balance the emerging tech with societal safety.
AI is gaining the ability to communicate with humans through language at a pace not foreseen even by its creators. Storytelling is key to getting us, humans, to behave the way we do. Gods, money and many other items are not biological; these phenomena are created by us and only hold their value through our belief in them. The AI tools will fight for our intimacy… To manipulate people’s behaviour there is no need to modify them physically (e.g. insert a chip into their brains) – people’s perceptions and subsequent acts have been altered by language for thousands of years.
When I spoke to my 12-year-old son about AI and related themes, I noticed he used the term ‘they’. When I asked, he corrected it to ‘it’, stating, “it clearly isn’t he or she”. What do you think?
Trust but verify? Do I still have to verify if the platform provider has done it? Yes, because they verify the tweeter, not their content. It’s still your job to think critically before shouting to everyone, “OMG, look what’s happening!!!”. What’s happening is that you were tricked into believing something that didn’t happen. ‘Fake it till you make it’ may well be the new slogan of misinformation campaigners.
Meta’s researchers used the Bible, in spoken and written form, to teach their open-source AI model to recognise a ton of languages. But, lo and behold, the source is stuffed with ancient bias and may produce all sorts of output. So, a move in the right direction for preserving small languages, but it needs more work.
OpenAI’s leadership is calling for an international oversight body to steer the sector. However, as I’ve written before, the race to the bottom hasn’t slowed down. From the tone, it feels like “we need to slow down, but before we do, we need to win!”. Yuval Harari notes in his speech that collectively putting the foot on the brake in the western world will not result in China or any other counterforce suddenly gaining the upper hand. If they had the capabilities required to succeed, it would have already happened. Pausing to design and install an oversight body now would not risk anything for wider society. It’s a bit like the post-Cold War period, when the US suddenly lost its counterbalance (aka the enemy) and its politicians were desperate to find a new one. I believe the race isn’t between players from the east and west. It’s a US domestic conflict in which the AI leaders all want to win the race. But what waits at the other end? So perhaps the political elite should look at their donors and decide what is important in the long term – the stability of the nation or their position.
A recent study showed that artificial neural networks (ANNs) learn similarly to the human brain. If we could only make these models less power hungry (i.e. raise efficiency) and push the computing to the edge, reducing reliance on central components, we’d be over the hump with this one. Such a step would enable applying machine learning wherever the data is, and conditional awareness would raise its ability to respond fast without prior knowledge of the environment. Oh wait, is that a good thing? Or is it a bit like Terminator?
Now, how do you counter a machine that knows everything and can memorise more than you ever could? You could train your memory to learn everything you need to know, or you could simply train it to know where the tools and resources are. Either way, memory training is good.
Have you heard of functional music? You know, the playlists that help you focus on a specific activity, like being sharp or winding down. Endel is a startup that has managed to schmooze Universal Music into a partnership in order to capture the booming market. A win for the listeners and Endel, as auto-generated music will be streamed on known platforms. I’d like to know how they will deal with the plagiarism question – was it an inspiration or a copy?
Meanwhile, Spotify has been working on simplifying its advertisement business – the next time you hear your favourite podcast host reading an ad, it may not be them any more. Give it a text, voice model, sentiment and voila!
How would you feel about World ID, a concept of an identity based on a scan of your iris? That would be your way to maintain personal privacy while proving your humanness in an economy disrupted by AI and automation, as stated by Alex Blania, the cofounder and CEO of Tools for Humanity. TFH is a spinout of Worldcoin, which Sam Altman started a few years ago. Investors are feeling bullish and pouring $115m into the project. I’d suggest reading the privacy notice, and I would like to see independent third-party validation of those claims.
Multilingual LLMs are seen as giving a leg up to social media platform owners, requiring fewer humans to moderate content created in multiple languages. However good these LLMs are, context awareness (or knowing your territory and where you stand) still matters. There are, as covered in this Wired article, a few issues we are going to face for the foreseeable future. These are:
focus on large languages
availability of training material for minor languages or dialects
definition of what is harmful
platform owners’ unwillingness to share how their models work
The companies should ditch the ‘rest of the world problem’ approach to shift their products towards being used more for good than ill.
This is a really positive development in identifying and taking predefined action on hateful content, both in images and text. Kudos to Microsoft for developing such a toolset. Yet time will tell how effective it is. Hope for the best!
Listen to Nilay Patel talking to Kevin Scott, Microsoft’s CTO for AI. Some takeaways below, but not all – spend that hour, it’s worth it.
Co-pilot creation – Microsoft doesn’t have the knowledge of the business users to build tools that help *this role in that sector*, but that person does. Giving them the ability to compose the co-pilot is an interesting development. As Microsoft owns the ecosystem, how will they share the additional revenue gained from AI co-pilot developments?
I really like the idea of a media provenance system – put an invisible cryptographic watermark and manifest into files, showing the receiver where they originated. This could be a boost to digital art and another blow against pirated content.
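To make the manifest idea concrete, here is a minimal sketch. Everything in it – the function names, the origin field, the shared signing key – is my own illustration, not how any real product works; actual provenance systems use public-key signatures and embed the manifest invisibly in the file’s metadata rather than relying on a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical publisher key; real systems would sign with a private key instead.
SECRET = b"publisher-signing-key"

def make_manifest(content: bytes, origin: str) -> dict:
    """Build a provenance manifest: a content hash plus origin, signed by the publisher."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "origin": origin}, sort_keys=True)
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Receiver checks that the signature is genuine and the content is unmodified."""
    expected = hmac.new(SECRET, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was forged or altered
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(content).hexdigest()

art = b"<bytes of a digital artwork>"
m = make_manifest(art, "studio.example")
print(verify_manifest(art, m))         # True: authentic copy
print(verify_manifest(art + b"x", m))  # False: tampered copy
```

The point for digital art and piracy is the last two lines: any modification to the bytes breaks the hash, so a receiver can tell an authentic original from a tampered or re-encoded copy.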
Microsoft’s position on compensation for the creative industry, whose output is used to train AI engines, is not entirely clear.
What is the definition of a good platform? Microsoft wants to encourage people to build assistive tools. An open platform doesn’t mean full access to the underlying tech, but the ability to build your own stuff via APIs. What would you build when the unit economics enable you to start as a price and quality leader, and then develop your revenue stream – without burning some state’s pension fund? Would you focus on the tech or on using the platform?
Common and separate objectives with Microsoft and Open AI, oversight boards and partnerships, and much more.
CB Insights has released its Q1 ’23 AI funding report. There’s a notable drop compared to the previous period, but that’s expected considering the overall belt-tightening in the tech sector. At the same time, three generative AI companies raised enough dough to gain unicorn status, and only one of them is from the US! Overall, M&A deals are up, and funding is sure to return to the 2022 level or surpass it by the end of the year. Money doesn’t like standing still…
Heard of Steven Levy’s Plaintext newsletter? If not, sign up for it – especially after reading his latest conversation with Gary Marcus, the AI critic who has become even more of one lately. Marcus has an interesting idea of forming an International Agency for AI, a non-profit to guide and keep watch over the industry and nation states alike.
Caryn AI is a girlfriend-for-hire service – I mean, a digital twin of a Snapchat influencer designed to reduce loneliness. Or that’s what its creator states whilst hoping to pull in $5m a month at the engagement rate of $1 per minute. “CarynAI is a step in the right direction to allow my fans and supporters to get to know a version of me that will be their closest friend in a safe and encrypted environment,” Caryn Marjorie added. No, there’s no altruism in play here, pure capitalism. Sex sells.
Responsible AI is a theme that all major developers aim to invest in. After all, trust, or the lack thereof, can change users’ perception of a company and encourage them to look for alternatives. When an AI system recommends a more positive tone in our messages, we are likely to receive more positive responses. The technique is called latent persuasion. The same applies when the tone and messages of the chatbot are negative or biased (again, the bias may be by design). And biased they are, reflecting the values of their creators and validators. A study called Whose opinions do LLMs really reflect? covers how we, the users of these systems, behave based on the tools we use. So our choice of tools will impact how we are perceived by others.
Who’s on the bus and who’s still trying to catch it? Ben Thompson covers Google I/O and related regulatory topics in his excellent Stratechery post.
Google has been in the news with its Bard AI chatbot, but not so much with the work it’s been doing with the pharmaceuticals industry, attempting to cut the lengthy discovery/trial process and time to market.
A subset of US voters are scared of the AI race. However, I have to agree with the words of Anyscale founder and UC Berkeley professor Ion Stoica: “Americans may not realize how pervasive AI already is in their daily lives, both at home and at work”. The unknown raises fears, but are your congressmen any wiser than the average Joe on the potential benefits and threats the AI race can pose to your future? Ask them.
How very true! Corporate L&D often focuses on outcomes desired by management, not by the people to be trained. Are we providing the most relevant skills training to the people who need it most, at the right time? Often we don’t. How do we improve it?
Grammarly was chosen by many as their go-to tool for churning out readable, coherent content. As tech giants eat its lunch, Grammarly is desperate not to lose (paying) customers and claims it’s here for good. It feels like the deep integration with Microsoft’s Azure infrastructure is a step towards showing off its product capabilities and eventually being acquired by MSFT. Agree with me?
The Decoder, an AI-focused news outlet, has a weekly dose of good stories. One of them covers “Why GPT-4 learns how we think?“. As I read it, it became clear that the headline should be how, rather than why! The article suggests that LLMs learn by absorbing content that contains a lot of folk psychology in its training data, e.g. literature, essays and opinion pieces. Well, that’s quite fascinating! Wired also published an excellent article on the basics of LLMs and how they function.
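The “learning by absorbing text” mechanism can be sketched with a toy bigram model – a deliberately crude stand-in for an LLM, and the mini-corpus below is my own invention. Where an LLM learns billions of weights over long contexts, this toy merely counts which word follows which, then generates by sampling from those counts; the principle of picking up patterns (folk psychology included) purely from the training text is the same.

```python
import random
from collections import defaultdict

# A tiny "training set" – three toy sentences, pre-tokenised.
corpus = ("people believe what they read . "
          "people read stories about minds . "
          "stories shape what people believe .").split()

# "Training": record which words follow which – absorbing patterns from text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(0)  # fixed seed so the example is repeatable
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: no continuation was ever observed
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("people"))
```

Every word the model emits was seen following its predecessor somewhere in the corpus – it has no opinions of its own, only the statistics of what it read, which is exactly the point the article makes about folk psychology in training data.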
To test out the capabilities of the davinci model, I ran a little experiment, describing a hypothetical situation and asking ChatGPT to voice its opinion. The exchange is below:
Prompt: Tom has a successful software development company that he expands every year through m&a with up to two companies. He's looking to ease his workload and decided to hire a junior m&a analyst at median market rate. He interviewed a few candidates. One candidate asked fundamental questions about business strategy, the value he is expected to deliver and success metrics of the role. After that interview Tom decided to pause hiring for the role. Why do you think he did that?
Based on the information provided, it appears that the candidate who asked fundamental questions about business strategy, the value they are expected to deliver, and success metrics of the role showed a higher level of critical thinking and strategic insight than the other candidates. This may have led Tom to realize that he needs to revisit the job description and the overall strategy of his company’s M&A activities before hiring someone for this role.
Image: Lexica prompt “chatbot providing legal advice and opinion on management”
Alternatively, Tom may have realized that the candidate’s questions revealed a misalignment between his expectations and the role he was looking to fill. In that case, he may need to reconsider the responsibilities and expectations for the M&A analyst role or explore other solutions for easing his workload.
Prompt: Tom doesn't believe in setting clear goals and strict expectations for the role. Would that change your opinion?
If Tom doesn’t believe in setting clear goals and strict expectations for the M&A analyst role, it might explain why he paused hiring after the candidate asked fundamental questions about business strategy, the value they are expected to deliver, and success metrics of the role. Tom may have realized that he needs to define clearer objectives and expectations for the role and that the candidate’s questions highlighted this need.
However, if Tom still doesn’t see the value in setting clear goals and expectations for the M&A analyst role, it could be a cause for concern. Without clear objectives and success metrics, it would be challenging for the M&A analyst to understand what they need to achieve and how their work will contribute to the company’s overall growth. This lack of clarity could lead to confusion, misaligned expectations, and a less productive work environment.
Therefore, if Tom doesn’t believe in setting clear goals and expectations for the M&A analyst role, it may be challenging to find a candidate who can effectively contribute to the company’s growth strategy without that guidance. In that case, it may be worthwhile for Tom to reconsider his approach to goal-setting and expectations, or explore other ways of managing his workload.
When you see such capability, what are your thoughts? How about the legal industry, which has many words on digital paper? This Wisebot AI is a starting point, but when do we get to the point where, with all legal cases publicly available, a chatbot will provide credible legal advice and the reasoning behind its opinions?
An interesting peek into the future of content generation and publishing. At which point will the customers of the ‘content mills’ stop caring about the human touch? Is it when they can’t distinguish between human and machine created content or when the deluge of AI-generated stuff always beats theirs for the attention? AI is already writing books, websites and online recipes – The Washington Post
Regulation is all the rage this spring. And for a good reason, as the race to the bottom gains momentum. And what about the US-China rivalry in the space as a roadblock to regulation? Will the US be driven by FOMO or by the prospect of angry out-of-job mobs on the streets? AI Regulation Fever Sweeps EU, US, and China (foreignpolicy.com)
As I search for this, Bing retrieves information and I won’t bother looking any further. Is that good or bad?
Being frightened when you’ve successfully flogged your firm to Google and are ready to retire is OK. Yann LeCun’s counterargument doesn’t fill anyone with pure joy either: “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.” “Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.” Geoffrey Hinton tells us why he’s now scared of the tech he helped build | MIT Technology Review
Happy with the position of tech firms with regards to your data and privacy? Gideon Lichfield from Wired interviews Signal’s Meredith Whittaker in a “Have a nice future” podcast episode “Can we get a little privacy?“. Recommended listening.
Interesting use case for a chatbot, and a bit worrying. As an experiment, try asking these chatbots for an opinion on the PM of a not-so-friendly neighbouring country. Push it a bit and read the responses. People whose main connection with the wider world is their smartphone are especially susceptible to the messages the machine tells them. ChatGPT is spawning religious chatbots in India – Rest of World
An excellent New Yorker essay by Ted Chiang exploring the bleak aspects of capitalism and how the AI race feeds the aspirations of said systems’ owners. “Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology—and those include uses that benefit shareholders over workers—without being described as opponents of technology.” Agree or not? Will A.I. Become the New McKinsey? | The New Yorker
As anyone with even a mild interest in the topic has noticed, there’s been a growing concern over our relationship with AI systems. The fear is that these systems, when left to their own devices (pun intended), will at some point in the not too distant future see us humans as obsolete and concoct a cocktail of measures to stop anyone or anything with the potential to derail them. The ‘it’ here being an artificial general intelligence, or AGI, system which has the capacity to learn rapidly, and to design and deploy tools to execute its wishes. Sounds scary? Perhaps. Too sci-fi? Definitely not.
The following is a reflection on the content I’ve consumed over the past weeks:
Industry insiders seem terrified yet over-excited about their creations.
There are multiple open letters calling for a temporary pause on further training of models with capabilities of GPT-4 and above. Some were late to the game; others want to help the industry leaders justify taking the time off necessary for guardrails to be developed.
AI regulation. China, the US, the EU – all are taking different approaches to regulating AI, with the US trailing behind. Brussels has sprinted ahead of the US and is designing a broad, risk-based approach. Many others look on in envy. The rivalry between the US and China may become a stumbling block, as the fear of losing out drives the US capitalist model forward, and maybe not towards the best outcomes. I can’t fully agree with Foreign Policy when it writes: “The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional ‘guardrails’ of technology management,” the authors wrote. “By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place”. The Western approach, as often happens with tech regulation, is divided between the EU, US and UK authorities, each keeping their own political-economic interests in mind.
…and finally, every major media outlet is covering the topic. Also, a number of niche outlets/portals have emerged, which is good, as their coverage is specific and their articles longer than a 2-minute read. I wonder whether this will scare people too far, leading to mass protests (fear of losing jobs and part of one’s identity), or whether the masses will remain oblivious to the oncoming changes, lost in doom-scrolling.
With so much to cover, I turned to ChatGPT for its view on AI risks. The output is concise and self-explanatory:
There are several risks associated with the AI race, some of which include:
Safety risks: As AI systems become more sophisticated, they could pose a threat to human safety. This could occur if the AI systems malfunction, are hacked, or are used for malicious purposes.
Job displacement: As AI systems become more capable, there is a risk that they could replace human workers in many jobs. This could lead to high levels of unemployment and economic disruption.
Ethical concerns: The development and deployment of AI systems raise a number of ethical concerns, including privacy violations, algorithmic bias, and the potential for AI systems to be used for surveillance or control.
Geopolitical tensions: There is a risk that the AI race could fuel geopolitical tensions between countries, as each country seeks to gain an edge in AI development and deployment.
Arms race: The development of AI systems for military applications could lead to an arms race, with countries competing to develop the most advanced and powerful AI weapons.
Unintended consequences: As AI systems become more advanced, there is a risk that they could cause unintended consequences that are difficult to predict. For example, an AI system that is designed to optimize a certain task could end up causing unintended harm or creating unforeseen problems.
The next prompt is obvious for a curious person – OK, how do we tackle these concerns?
Addressing the risks associated with the AI race will require a multi-faceted approach involving collaboration between different stakeholders, including governments, industry leaders, researchers, and civil society. Here are some strategies that could help tackle these concerns:
Regulation: Governments can play a key role in regulating the development and deployment of AI systems, setting standards for safety and ethical considerations. This could include establishing regulatory bodies to oversee AI development, as well as implementing laws and regulations to ensure accountability and transparency.
Ethical guidelines: The development of ethical guidelines and best practices for AI systems could help to mitigate the risk of unintended consequences and ethical concerns. This could involve the creation of industry-wide standards for data privacy, algorithmic bias, and transparency.
Research: Continued research into AI safety and ethics is essential to address the risks associated with AI. This could involve developing new techniques for verifying and validating AI systems, as well as exploring ways to make AI more transparent and explainable.
International collaboration: International collaboration and cooperation between governments, researchers, and industry leaders can help to mitigate the risk of geopolitical tensions and an arms race. This could involve the establishment of international agreements on the development and deployment of AI systems.
Education and awareness: Raising awareness about the potential risks and benefits of AI systems is essential to ensure that the public is informed about the implications of AI development. This could involve educating the public about the potential risks of AI, as well as promoting AI literacy among policymakers and industry leaders.
Overall, addressing the risks associated with the AI race will require a coordinated and collaborative effort from multiple stakeholders, involving a range of strategies including regulation, ethical guidelines, research, international collaboration, and education and awareness.
What should we make of it? I really like the views of Max Tegmark, covered by Lex Fridman in one of his long-form conversations. Max’s point is that what’s happening currently isn’t an arms race where the winner takes all, but a suicide race to the bottom where in the end everybody loses. There’s an excellent parallel Max draws, comparing the current leading systems in the AI race to a cute and adorable baby nobody wants to see as a risk to the incumbents (us). Lex’s podcast is one of the really good sources to aid thinking.
What will the near future look like?
This is a clear sign of which industries will be affected by AI being connected to the internet.
Skills and education. We absolutely need to focus our education system on the skills that will be needed in the future. We mustn’t lull ourselves into believing that AI won’t stand up for its own interests (and we can’t be sure what those are). Teaching students how to manage AI systems, from core infrastructure to security to prompt engineering, is necessary. We can manage the systems only when we understand how they operate. That is harder with a learning system that can adapt to changes in its environment (objects around it, users, conditions), so we need to focus on what the world of tomorrow will look like – and teach students to design it.
Regulation is being developed in the EU. I totally agree with the position of Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age: “On artificial intelligence, trust is a must, not a nice to have.” Meanwhile, the US is beginning to study possible rules to regulate AI. Whilst the EU likes to regulate everything, supposedly for a better future, the US doesn’t really want to do anything that might give others an edge over its technological prowess. Biden views the responsibility as lying squarely with the tech companies and self-regulation. Not really a solid strategy when they are all racing to the bottom… China, on the other hand, has been at the front of the pack. In an article dating to 2022, NatLawReview covers Chinese regulatory activity: “China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies’ use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and “disseminate positive energy.”
What about generative AI’s relationship to energy? Training the models can use huge amounts of energy to start with. On the other hand, AI systems can detect, highlight and correct inefficiencies in industrial and energy systems. Take it as an investment in the future.
And lastly, the compensation mechanisms for everyone else. As with any tectonic shift, there will be a small number of winners and many losers in the AI race. In my view, universal basic income (UBI) should be actively discussed in the parliaments of the most digitally advanced countries. This would be a key measure for tackling the potential job losses created by task automation. I recommend reading the opening position of the study released in August 2021. I wonder how much the position of UBI opponents has changed over the past six months?
What can you do now?
Learn to understand how these systems impact you, think along, and learn to identify auto-generated content – especially content that plays on our worst fears and hatreds – and report it to the authorities. Talk to your elected MP and ask for their and their party’s position, with an explanation of what they will do to tackle the areas GPT highlighted in response to my prompt above. Educate those around you who dismiss the risks as nonsense or simply take a ‘not-gonna-happen-to-us/in-our-lifetime’ approach. Consider that no industry will be untouched by the changes in the technology landscape; some changes will be beneficial to us, others not so much.
A couple of weeks ago I decided to remedy my sunshine deficit and spent a few days in Cyprus and Israel. Below is a short summary of what caught my eye and my camera.
Cyprus – Paphos and the Troodos mountains
I landed in Paphos and planned to stay in the southern part of the island, with one day in the Troodos mountains. Mt. Olympus is the island’s highest point and also home to a British Army monitoring post – it felt homely enough to warrant visiting the area.
A friend living on the island recommended visiting in early April – the weather is already warm and the tourist hordes have not yet arrived. Since Ryanair decided to end its Tallinn–Paphos flights at the end of March, I had sufficient motivation to go already in March. Booked three weeks ahead, a one-way flight cost €25.
Paphos is an eclectic mix of different ethnic groups, much like Tel Aviv, which I’ll get to below. You hear a lot of Russian, and the warm climate draws plenty of Eastern European tourists even before the season starts. The sea is 17–18 degrees and salty enough to float in without much effort. A rental apartment can be found under the name Rododafni Apartments, and the owner is very accommodating.
Mobile coverage is good; in some places it dropped to 3G, but mostly the phone showed a 4G or 5G network.
My routine was roughly this: running and hiking on alternate days. The seaside promenade is fine for running too, although you don’t want to pound concrete for long. On the second day I drove my tiny rental car to the heart of the Troodos mountains, where the temperature fell from 19 to 8. Moving actively on the southern side, it was still pleasantly warm. I logged 500 metres of ascent on the trail, 100 of which came between kilometres 8 and 9 while navigating by compass bearing.
The roads carry signs for the various animals you might encounter – goats, sheep, cows…
On the second-to-last day I took a city tour to see the historical ruins. I missed the Tombs of the Kings, as I didn’t manage to get there before 17:00. Next time I’ll have to time my running and swimming better 😉
On the last day I took one more hike, this time in the nearby Avakas Gorge. The trail itself is about 3 km; taking the loop back makes it about 12 in total.
At the 8 km mark I had planned to return to the car by compass bearing, but that would have meant a -/+200 metre elevation difference and an hour and a half of scrambling through thicket. The picture below shows where I would have liked to get to, had that ravine not been in the way… With my trekking poles I would certainly have gone.
Avakas Gorge is one of those sights that draws tourists of all ages and abilities. In the lower sections the temperature difference is up to 10 degrees, and a pleasant breeze forms in the gorge. In one spot I climbed a couple of dozen metres up a rock face to leave a TARTU logo sticker there.
Paphos > Tel Aviv 6:45 Ryanair
The next morning I was in Tel Aviv by 8. On the Paphos side stood a Ryanair employee with a bitter, stony face, from whom I carefully kept my slightly-too-large carry-on at a safe distance. Her colleague beeped the passengers in her own queue through with super-efficient automaticity and didn’t bat an eye at oversized luggage. It would have been a sin not to go, when the flight from Paphos takes an hour of your time and €14 from your wallet. The first two pictures show the surroundings of the Tel Aviv bus station, which is not a particularly glamorous area. This world city has everything from extreme wealth to poverty, and it is inhabited by a veritable Babel of peoples. You will find similarities with any modern metropolis. The water piping was also memorable, with components in place for every conceivable extension. Work was often done like this: three low-paid Arab labourers shovel asphalt while two foremen-cum-project-managers watch and instruct. State capitalism in its purest form…
A bit about the practicalities of travelling.
Mobile data is available through a service called Airalo – an Annatel eSIM with 30 days and 3GB of data costs US $11, which is dirt cheap compared to the roaming packages offered by diil/Telia. I used about a third of it, mostly while chatting on WhatsApp on the move.
For transport, the easiest option is the Moovit app plus cash – on city buses, either a transport card or cash works. There’s a small trick here: to pay with the app, you have to register a payment method and add your Israeli ID number. Since I don’t have one, I started typing in my Estonian ID code – 3780411 was enough.
Money matters are handled nicely by the Wise app – you create a separate account for shekels, convert as needed, and pay with your phone. For cash withdrawals it’s better to plan ahead and order a physical Wise card.
On the first day I visited the Tel Aviv Museum of Art.
The museum has several permanent exhibitions as well as rotating displays. Each benefactor has also been granted a named piece of the museum, in proportion to their contribution.
The experimental art section was quite fun / peculiar.
I spent the second half of the day in the company of Moran, who showed me the city. A very cool woman of Ethiopian origin, whose family had made falafel for three generations, but during covid they too gave up and moved on to other lines of work.
The route went through Tel Aviv – the Carmel Market, Jaffa, and back along the seafront. On a side street of the Carmel Market there is also a craft market on Tuesdays, which is a sight in itself.
As some of you know, I like street art. Not random graffiti scrawl, but something that could be called art. Without an inner drive, there is no good result either – just (mental) laziness with a touch of anarchy. Below is a summary of the street art that caught my eye in Tel Aviv. Some of it is clearly pointed, some more abstract.
A TARTU logo sticker was left on a post in Jaffa as well. Thank you, Martin! I still have a handful of them, so more travel is in order.
In Tel Aviv I stayed at a hotel called ISLA and tried a shared room for the first time. Six very well-appointed capsules, each with power, individual lighting, a privacy screen and decent air conditioning. The experience encouraged me to try similar places again; they are 30–50% cheaper than regular rooms. Each guest has a secure storage locker, and the shared washing facilities are better than in the average 2–3* hotel. There is a very decent café in the common area.
On the second day, with the kind permission of the hotel admin, I left my bag in a locked cabinet and spent some more time in Jaffa and in the artists’ quarter, Neve Tsedek. You could spend a whole day there too. And then another.
On Wednesday I went to Jerusalem, since from Friday afternoon onwards bus services become very sparse. The weather was exceptionally unpleasant – +6, rain and wind – and I was warned that the mountains can be a different experience from the lowlands. In any case, I visited the Western Wall and the Mount of Olives, and took a box of wonderful dates home from the market. The falafel there is also unlike anything I have encountered in Estonia.
On Thursday, Kersti took me to the Ramla market, which was local compared to Jerusalem’s, but attractive enough for a vegetarian. The radishes and kohlrabi are comparable in size, there are all kinds of citrus fruits in abundance, and they are perfectly ripe for eating. Pomegranates and the like for eating or as juice, and so on.
On the second-to-last day I also went running in the hills of the Tsora forest. It was pleasantly varied terrain. Depending on the season, you can catch a peculiar sight there – cyclamens and cacti growing side by side.
On the last morning I took a walk around Kfar Uria, the village where I was staying.
And with that, this short trip under the sun came to an end. Nothing left to say but Shalom y’all, and to return some day.