AI in the near future and your response to it

As anyone with even a mild interest in the topic has noticed, there has been growing concern over our relationship with AI systems. The fear is that these systems, when left to their own devices (pun intended), will at some point in the not-too-distant future see us humans as obsolete and concoct a cocktail of measures to stop anyone or anything with the potential to derail it. The ‘it’ being an artificial general intelligence (AGI) system with the capacity to learn rapidly and to design and deploy tools to execute its wishes. Sounds scary? Perhaps. Too sci-fi? Definitely not.

The following is a reflection on the content I’ve consumed over the past few weeks:

  • Industry insiders appear both terrified of and over-excited about their creations.
  • There are multiple open letters calling for a temporary pause on training models with capabilities at or above GPT-4. Some signatories were late to the game; others want to help the industry leaders justify taking the time needed for guardrails to be developed.
  • Education programmes are being adapted to the changing tech landscape to meet the surge in demand for skills our education system currently fails to cater for.
  • AI regulation. China, the US, the EU – all are taking different approaches to regulating AI, with the US trailing behind. Brussels has sprinted ahead of the US and is designing a broad, risk-based approach; many others are looking on in envy. The rivalry between the US and China may become a stumbling block, as the fear of losing out drives the US capitalist model forward, and maybe not towards the best outcomes. I can’t fully agree with Foreign Policy when its authors write: “The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional ‘guardrails’ of technology management. By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place.” The Western approach, as so often happens with tech regulation, is divided between the EU, the US and the UK, with the relevant authorities keeping their own political and economic interests in mind.
  • …and finally, every major media outlet is covering the topic. A number of niche outlets and portals have also emerged, which is good, as their coverage is specific and their articles take longer than two minutes to read. I wonder whether this will scare people into mass protests (fear of losing jobs and part of their identity) or whether the masses will remain oblivious to the oncoming changes, lost in doom-scrolling.

With so much to cover, I turned to ChatGPT for its view on AI risks. The output is concise and self-explanatory:

There are several risks associated with the AI race, some of which include:

  1. Safety risks: As AI systems become more sophisticated, they could pose a threat to human safety. This could occur if the AI systems malfunction, are hacked, or are used for malicious purposes.
  2. Job displacement: As AI systems become more capable, there is a risk that they could replace human workers in many jobs. This could lead to high levels of unemployment and economic disruption.
  3. Ethical concerns: The development and deployment of AI systems raise a number of ethical concerns, including privacy violations, algorithmic bias, and the potential for AI systems to be used for surveillance or control.
  4. Geopolitical tensions: There is a risk that the AI race could fuel geopolitical tensions between countries, as each country seeks to gain an edge in AI development and deployment.
  5. Arms race: The development of AI systems for military applications could lead to an arms race, with countries competing to develop the most advanced and powerful AI weapons.
  6. Unintended consequences: As AI systems become more advanced, there is a risk that they could cause unintended consequences that are difficult to predict. For example, an AI system that is designed to optimize a certain task could end up causing unintended harm or creating unforeseen problems.

The next prompt is obvious for a curious person: OK, how do we tackle these concerns?

Addressing the risks associated with the AI race will require a multi-faceted approach involving collaboration between different stakeholders, including governments, industry leaders, researchers, and civil society. Here are some strategies that could help tackle these concerns:

  1. Regulation: Governments can play a key role in regulating the development and deployment of AI systems, setting standards for safety and ethical considerations. This could include establishing regulatory bodies to oversee AI development, as well as implementing laws and regulations to ensure accountability and transparency.
  2. Ethical guidelines: The development of ethical guidelines and best practices for AI systems could help to mitigate the risk of unintended consequences and ethical concerns. This could involve the creation of industry-wide standards for data privacy, algorithmic bias, and transparency.
  3. Research: Continued research into AI safety and ethics is essential to address the risks associated with AI. This could involve developing new techniques for verifying and validating AI systems, as well as exploring ways to make AI more transparent and explainable.
  4. International collaboration: International collaboration and cooperation between governments, researchers, and industry leaders can help to mitigate the risk of geopolitical tensions and an arms race. This could involve the establishment of international agreements on the development and deployment of AI systems.
  5. Education and awareness: Raising awareness about the potential risks and benefits of AI systems is essential to ensure that the public is informed about the implications of AI development. This could involve educating the public about the potential risks of AI, as well as promoting AI literacy among policymakers and industry leaders.

Overall, addressing the risks associated with the AI race will require a coordinated and collaborative effort from multiple stakeholders, involving a range of strategies including regulation, ethical guidelines, research, international collaboration, and education and awareness.

What should we make of it? I really like the views of Max Tegmark, covered by Lex Fridman in one of his long-form conversations. Max’s point is that what’s happening currently isn’t an arms race where the winner takes all, but a suicide race to the bottom where, in the end, everybody loses. There’s an excellent parallel Max draws, comparing the current leading systems in the AI race to a cute and adorable baby that nobody wants to see as a risk to the incumbents (us). Lex’s podcast is one of the really good sources to aid thinking.

What will the near future look like?

This is a clear sign of which industries will be affected by AI being connected to the internet.

Skills and education. We absolutely need to focus our education system on the skills that will be needed in the future. We mustn’t lull ourselves into believing the AI won’t stand up for its own interests (and we can’t be sure what those are). Teaching students how to manage AI systems, from core infrastructure to security to prompt engineering, is necessary. We can manage these systems only when we understand how they operate. That is harder with a learning system that can adapt to changes in its environment (the objects around it, users, conditions), and hence we need to focus on what the world of tomorrow will look like. And to teach students to design it.

Regulation is being developed in the EU. I fully agree with the position of Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age: “On artificial intelligence, trust is a must, not a nice to have.” Meanwhile, the US is beginning to study possible rules for regulating AI. Whilst the EU likes to regulate everything, supposedly for a better future, the US doesn’t really want to do anything that might give others an edge over its technological prowess. Biden sees the responsibility as lying squarely with the tech companies and self-regulation. Not really a solid strategy when they are all racing to the bottom… China, on the other hand, has been at the front of the pack. In an article from 2022, The National Law Review covers Chinese regulatory activity: “China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies’ use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and ‘disseminate positive energy’.”

What about generative AI’s relationship to energy? Training the models can consume a huge amount of energy to start with. On the other hand, AI systems can detect, highlight and correct inefficiencies in industrial and energy systems. Take it as an investment in the future.

And lastly, the compensation mechanisms for everyone else. As with any tectonic shift, there will be a small number of winners and many losers in the AI race. In my view, universal basic income (UBI) should be actively discussed in the parliaments of the most digitally advanced countries. It will be a key measure for tackling the potential job losses created by task automation. I recommend reading the opening position of the study released in August 2021. I wonder how much the position of UBI opponents has changed over the past six months.

What can you do now?

Learn to understand how these systems impact you, think along, learn to identify auto-generated content, especially content that plays on our worst fears and hatreds, and report it to the authorities. Talk to your elected MP and ask for their and their party’s position, with an explanation of what they will do to tackle the areas GPT highlighted in response to my prompt above. Educate those around you who dismiss the risks as nonsense or simply take a ‘not-gonna-happen-to-us/in-our-lifetime’ approach. Consider that no industry will be untouched by the changes in the technology landscape; some will be beneficial to us, others not so much.

Have a nice future!

Why do I coach?

From understanding my position to setting and achieving goals.

When I coach people I start with the basics: what do they want to talk about, why is it important to them, and how will they know when they have reached the desired outcome? In a recent session a coachee turned the question around: why do I coach, WIIFM (what’s in it for me)? Why do I do it and what do I get in return?

When I began my ICF journey in 2020 I had to provide the same answer. At the time I was a mid-level manager in charge of a department of 22 IT professionals and a €4m annual budget. Four of them were software project managers whose decisions determined whether we delivered products on time and on budget. All of the project managers relied on partner development teams. I felt that I needed to be less involved in the technical elements, allowing enough time to focus on strategic objectives and on developing my team. I felt I needed to improve my skills and knowledge on the latter: people leadership. By then I had devoured a trove of books on leadership, persuasion, NLP and strategic management. When I looked at the course curriculum I sensed that this, combined with previous learning, a few mentors whose experience I could draw upon, and practice, would prepare me for my next role. Not knowing what that role would be, I enrolled on the ICF ACC course.

The whole course is a mix of self-discovery, learning techniques and practice. The latter is supported by the former, and both are equally important. Some people are naturally better listeners and are genuinely interested in others. Some can train these skills, and some find it’s not for them.

Since then I have learned far more than I thought I would. Practising coaching with people from various areas of life, in both the private and public sectors, has solidified my learning and helped me experience new perspectives. I would say that coaching has given me as much as, if not more than, it has given my coachees, who have found clarity in their problem fog. I have learned more about myself, what I want to do and where to develop. I believe that each coaching session gives something to both parties: the coachee should gain clarity on their issue or problem, and the coach should improve their technique. Being a better coach helps get to solutions quicker and gives the coach more time to focus on everything else in their life. And it helps you get to the top faster.

Short answer? Interest in people, curiosity and empathy are the qualities I have identified in myself. I believe these are the basis of a good leader who wants to achieve more, and to do so through people. Continuous development leading to self-actualisation would be another way to put it.

What’s your goal? Do you know it, and have you mapped a route to it? How do you know how far along you are?

If you are keen to find out and get started, please pick a date and time that suits you from my public calendar. The first session is free. At the end of it you’ll know whether you want to continue.

Let’s climb that mountain together!

Half Dome, Yosemite National Park

Workplace and people

A friend penned a post on the millennial workforce and the currently prevalent business culture, asking a number of questions in each section. I thought about it and felt the need to chip in. As I do.

First things first. I think the behaviours Nicolas describes in his post do not apply only to Generations Y and Z; they are taking root across the business landscape. Not everyone contributes directly to the digital economy, yet many are affected by the changes it has brought about. Take any traditional trade. A brief look at its state today shows how much has changed within the past 15 years. The supply chain has become global, the primary distribution channel is online, and delivery is often handled by gig-economy workers who are paid per delivery and take little visible pride in their work, so quality suffers. Many early retirees have returned after realising the type of lifestyle their pension actually supports. Many are freelancing, not out of choice but necessity. Often they have no option, as the organisations they work with (not for!) have business models dependent on reduced staff overheads. Add what we used to call the “cost of doing business” and you have no business. In some areas it’s global trade, in others high business rates. We have moved from stable, permanent positions to short-term contracts. Many of us who have spent around 20 years working have changed jobs at least three times, some even more. Even those of us in permanent positions don’t tend to stay with an organisation for more than three years on average. Careers built merely on longevity are out; sharp minds and clear objectives are in. Or at least they should be. We are likely to see inequality in the workplace for some time, until the Big Reset comes. And it will come, either in the form of universal income or the nationalisation of (by then still traditional) industries.

I personally favour the UI route. Set at 70% of the minimum wage, it would enable people to just get by (in council property, but that’s a different topic!) and top up their earnings by freelancing and working with organisations of their choosing. Some argue that it should be the full minimum wage, though that camp will have a hard time standardising the figure even in an EU context (a €1,400 minimum in France is above the average wage in Estonia). Money will be digitised and all income over a certain threshold taxed as it is now, hopefully reducing the incentives and options for fraud. Getting rid of physical money will also reduce the cost of asset ownership to the central bank and should again leave more to fund UI. Quartz @ Work has a very timely piece on full employment and fulfilment. Full employment is seen as a cornerstone of Western society, and people usually find it hard to explain the gaps in their careers. Instead, being out of work should be a choice when people feel they need a break, followed by a successful return to work, provided they have the necessary skills and attitude to perform as expected.
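
To make the arithmetic concrete, here is a minimal Python sketch. Only the 70%-of-minimum-wage rate comes from the paragraph above; the wage figure, tax-free threshold and flat tax rate are made-up placeholders for illustration, not a policy proposal.

    # Illustrative only: the 70% rate is from the post; the other figures are hypothetical.
    MINIMUM_WAGE = 1_400          # monthly, EUR (roughly the French minimum)
    UBI_RATE = 0.70               # share of the minimum wage paid as universal income
    TAX_FREE_THRESHOLD = 1_500    # hypothetical monthly threshold, EUR
    TAX_RATE = 0.30               # hypothetical flat rate above the threshold

    def monthly_income(freelance_earnings: float) -> float:
        """Universal income plus freelance top-up, taxed only above the threshold."""
        ubi = UBI_RATE * MINIMUM_WAGE
        gross = ubi + freelance_earnings
        taxable = max(0.0, gross - TAX_FREE_THRESHOLD)
        return gross - TAX_RATE * taxable

    print(monthly_income(0))      # just getting by on universal income alone: 980.0
    print(monthly_income(1_000))  # topped up by freelancing: 1836.0

The point is only that the top-up model is easy to reason about; the actual figures would of course be a political decision.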

The themes Nicolas writes about are well covered by many: empowerment, ownership, flexibility, purpose, opportunities and new types of work. Let’s look at each of them once more, then.

Many people, not just the young, expect to be empowered to make decisions and to have ownership of their decisions and their outcomes. They expect to be treated as equals – not as equally capable and experienced, but as deserving equal opportunities. Many have argued, especially about apprenticeship schemes, that it’s all about them and not about us, the employers. But this is untrue. The young, when motivated, allowed to make small mistakes and to learn from them rather than be punished, will pay it back with their energy and a willingness to throw everything they have at completing the project on time. They are willing to shed their shy selves in order to meet the deadlines. Many more seasoned colleagues would try to delegate the task to someone else and stay in Mediocristan. Working with apprentices 18 years ago in my own small IT business, and recently with fast-streamers, has shown me time and again how much value these young people can deliver with the right level of coaching, delegation and independence. But wait, this applies equally to more seasoned employees. To ensure they don’t actively avoid decision-making and ownership, however, the organisation needs to have reached the necessary level of maturity. Not quite teal, but micro-management must be out and trust in.

Flexibility in the workplace is nothing new. Nor is the notion of flexibility when it comes to choosing the place of work. I have a few friends who have been working from a home study since the mid-nineties. Fine, their jobs enabled this (editor, consultant, marketer, software programmer), but it was never seen as revolutionary, rather as their choice. What is new is not just where but when we choose to work. Dan Pink spoke at a recent RSA event about timing, and I can attest to his conclusions about timing our work. There are generally three stages: peak, trough and recovery. In my case it’s a bit like this:

[Chart: my daily peak, trough and recovery periods]

I’m usually switched on in the mornings and can stay focused for long periods, until noon or sometimes longer. This is the time to work on analysis and produce written content. Then comes the slump, where I’m not the sharpest pencil in the box; that’s a good time for admin. Neither period is suitable for meetings. When we are in focus mode, we find it hard to accept others’ ideas; during the trough we are simply unable to absorb any information, which makes it the worst time for any meetings or workshops. Hence I try to schedule all my meetings (virtual or in person) either right before lunch or after 15:00, when the recovery kicks in and we are all more agreeable. This is the flexibility we should grant to all our colleagues. We should deploy tools that let people submit the times at which they are ready to collaborate, and avoid any meetings outside those windows.
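
At its simplest, such a tool only needs to intersect the windows people submit. Below is a minimal Python sketch under that assumption; the names and time windows are invented for illustration, not taken from any real product.

    from datetime import time

    def common_window(windows):
        """Return the (start, end) slot shared by all submitted windows, or None."""
        start = max(w[0] for w in windows)
        end = min(w[1] for w in windows)
        return (start, end) if start < end else None

    # Hypothetical colleagues and the windows in which they are happy to collaborate.
    preferred = {
        "me":      (time(11, 30), time(13, 0)),   # right before lunch
        "analyst": (time(11, 0),  time(15, 0)),
        "manager": (time(12, 0),  time(17, 0)),
    }

    slot = common_window(list(preferred.values()))
    if slot:
        print(f"Book the meeting between {slot[0]:%H:%M} and {slot[1]:%H:%M}")
    else:
        print("No shared window today; move the meeting or trim the invite list")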

You could say that people fall into two categories: the first static, the second dynamic. The second crowd are after opportunities to prove how good they are. Get some testosterone going, tick that thing off the list and get the dopamine kick. Feels good, right? It tends to be the younger crowd who are looking for ways to gain new knowledge or participate in that new venture. Perhaps it pays off. And if it doesn’t, no biggie; next time they’ll try again. What we need to encourage is looking for opportunities within the organisation. These may be incremental improvements to a process or product that drive our businesses forward. It’s very rare that a groundbreaking change is introduced and effectively managed into production. The rest of the time everything is in beta. And changing. We need to create a culture where risk is seen as both a threat and an opportunity, not just the former.

Take all of these and… nothing works when people don’t have purpose in their working lives. Purpose and meaning are much-coveted for jobs above the pay threshold where a further increase in pay has no effect on the quality of output.

The types of work that existed on the fringe have become mainstream in Western economies, and those previously taken for granted have disappeared. Manufacturing is a good example. Working for Saint-Gobain in the late 90s and the early 2000s, I saw automation and the streamlining of supply chains to reduce the cost of the product. Robotics found its way into assembly that previously required a highly skilled workforce performing tasks demanding precision. The need to reduce waste and not merely optimise but maximise output at the highest level of quality will see new plants employing a handful of highly skilled operators overseeing the work of many machines.

We used to cook at home and only occasionally order takeaway food. Especially in urban environments this has become mainstream: people value their time and are willing to pay for food and delivery. Delivery has often been outsourced to the likes of Deliveroo and fulfilled by riders on bikes finding their way through the maze. They are often as lost as Uber drivers. Everyone a taxi driver in their spare time? That’s not really valuing one’s time; it may simply be the only option to earn enough to live in a modern metropolis. Are these jobs going to be here in 10 years? Probably not; technology will develop, along with the legal framework, to automate them.

What will the future of work look like for those of us in knowledge work? We’ll have many jobs over our working lives, quite possibly looking for work every few years and working for and with many different organisations. This raises the need to be adaptable to change. I used to think that learning enough of many things without mastering a single skill was not sufficient. A shallow generalist over a highly skilled specialist? A recent Medium post by Michael Simmons nicely builds the case for the polymath as probably the best placed to survive in an unknown future. It’s not just transferable skills we need; we need to be able to synthesise useful elements from different disciplines to meet future challenges.

 

I hope this sparks some debate.
