Inspired by the article “Deutsche Bank’s seven lead use cases for GenAI” (source behind paywall).
At a recent Risk Live Europe conference, Deutsche boasted about seven exciting GenAI themes, either under development or already completed. I read them and could not get excited in the least. I won’t infringe on the author’s terms, so I’ll cover the main areas in my own words.
AI should be used to innovate or improve the bank’s position, not just for press releases. It’s sad how the current buzzword GenAI can be slapped on every activity stream regardless of whether AI is actually applied. I wonder why the efficiency-boosting strategy of “buy and configure over build and customise” has not been adopted. The AI marketplace is moving fast, and I believe the key activities should be the strategic focus in the public and corporate sectors overall – know what you have, develop data-driven decision making, change the culture by up-skilling people to use modern tools, and upgrade your processes.
Here’s a quick pessimist’s (a.k.a. informed optimist’s) view of the initiatives:
A document processing system that sifts, sorts and categorises content into structured data sets and can run predefined workflows on it. As described, this could be achieved with OCR/content extraction for paper or otherwise unstructured text documents, rules-based content searches and a few workflows – basically an RPA exercise and nothing new.
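To show how little of this needs GenAI, here’s a minimal sketch of the rules-based part of such a pipeline: text that has already been extracted (by OCR or otherwise) is matched against keyword rules and turned into a structured record. The categories, rules and field names are my hypothetical illustrations, not anything Deutsche Bank described.

```python
# Minimal sketch of "document processing" as plain RPA: take extracted
# text, apply keyword rules, emit a structured record. Categories and
# rules are hypothetical illustrations.
import re
from dataclasses import dataclass, field

@dataclass
class DocumentRecord:
    category: str
    fields: dict = field(default_factory=dict)

RULES = {
    "invoice": re.compile(r"\binvoice\s+(?:no\.?|number)\b", re.IGNORECASE),
    "kyc": re.compile(r"\b(passport|proof of address)\b", re.IGNORECASE),
}

def categorise(text: str) -> DocumentRecord:
    """Match already-extracted (OCR'd) text against simple rules."""
    for category, pattern in RULES.items():
        if pattern.search(text):
            return DocumentRecord(category, {"matched_rule": pattern.pattern})
    return DocumentRecord("uncategorised")

print(categorise("Invoice number 42 attached for your records").category)  # invoice
```

A downstream workflow engine then only needs the `category` field to decide what to do next – exactly what RPA suites have offered for years.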
Email processing – “If we can read that and turn it into data, we can automatically process it, give it to the right person and even actually respond,” said the Deutsche executive. Nothing to do with AI of any sort. Email has been sorted and forwarded according to set rules by CRM systems since… a long time ago… and the same can now be done in Exchange Online. Why reinvent the wheel as a separate application is beyond comprehension. Where’s the innovation?
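For perspective, rule-based mail routing boils down to something like the sketch below. The rules and mailboxes are made up; in practice this sits declaratively in a CRM or in Exchange Online transport rules rather than in code.

```python
# Hypothetical rule-based mail routing of the kind CRMs (and Exchange
# Online transport rules) have done for years. Addresses are made up.
ROUTING_RULES = [
    (lambda m: "complaint" in m["subject"].lower(), "complaints@bank.example"),
    (lambda m: "statement" in m["subject"].lower(), "operations@bank.example"),
]

def route(message: dict) -> str:
    """Return the mailbox a message should be forwarded to."""
    for predicate, mailbox in ROUTING_RULES:
        if predicate(message):
            return mailbox
    return "triage@bank.example"  # fallback queue for a human

print(route({"subject": "Complaint about card fees"}))  # complaints@bank.example
```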
Excel and a million other random apps used in an ad-hoc manner to store client lists is a really poor BCM practice, however widely exercised. Fix the underlying cause (lack of a common process), not the resulting mess, and you have gained your multi-million worth of hidden benefits. Otherwise the mess keeps growing, and so does the bank’s AI team of mechanical turks.
Another tool is allegedly used to ingest data from public sources for media monitoring/fake-news discovery. Either the journalist missed the point, the presenter was blowing hot air, or something is missing here. Being clever, you’d use a media monitoring service rather than build one yourself. Then you automate your legal workflow to send cease-and-desist notices and manage the cases. Oh, that is also available as SaaS…
A digital assistant, trained on a specific form (such as the 10-K and 10-Q filings made by US companies) and dataset, has been rolled out. Take it as a configured SaaS product and focus your effort on up-skilling people.
Using digital agents in customer service is probably the most valuable of the lot: when trained on adequate data and made to respond in a natural way (caveat – in select languages only), they can make a meaningful contribution to the support unit’s bottom line.
The Microsoft Copilot roll-out (personal productivity and coding enhancement), on the other hand, is a procedural change (adopting a SaaS service for tasks such as direct transcription of Teams calls is a config plus cultural change), not a fintech innovation.
Takeaway? Using different off-the-shelf and open-source tools (some labelled LLM, others ML-something) to structure your unstructured data, make it discoverable and usable, and then taking a no/low-code approach to let that data improve business outcomes is definitely the way to go. In total, one or two of the seven initiatives may actually be financially beneficial and boost productivity, while the others mask existing poor practices. Some activities are a given to remain profitable in the long term; others form part of the learning curve (the need to be seen doing something AI-related). Is this digitalising the CX and your business? Yes, to an extent. Is this innovation? Nope, sorry.
A week of travels and many long-form podcasts later, I return here to reflect on the world through a technology lens. This time the focus is on regulation, future computing and models.
The current US administration is grappling with crafting a policy that suitably distinguishes AI tech designed for consumer use from that built for reconnaissance and offensive purposes by Chinese companies. Considering how modular and API-driven a solution’s functionality can be, it’s not an easy task. Also bear in mind that some 30 years ago, new tech was first tested and applied by the military, then the government sector, then large enterprises, followed by SMEs and eventually consumers. Today, consumers jump in first, followed by the SME sector and then the rest. The speed and cadence of innovation has become such that large organisations find it increasingly hard to embed new tech into their processes and working practices.
Stanford University’s CRFM, which assesses AI foundation models, has taken a look at the major market players in light of the proposed EU AI Act. It finds that incumbents score poorly on the current areas of concern and stresses the need to enact the Act. The findings show a sharp divide between the expected and actual behaviour of foundation model providers when it comes to closed models. The research recommends establishing common standards. On disclosures, I especially like the following statement: “While progress in each of these areas requires some work, in many cases we believe this work is minimal relative to building and providing the foundation model and should be seen as a prerequisite for being a responsible and reputable model provider.” In short: document as you build your models and have the courage to disclose what the black box does. Read the full report here – it’s an enlightening 10 minutes.
The EU has launched a project to build four AI test centres. The centres, “which are virtual and physical, will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities,” according to Bloomberg. These should go hand in hand with, and support, the recent AI Act. Let’s hope developers eagerly use these facilities.
Google is desperate not to lose its users and to gain more of a foothold in the AI race. It recently released a new secure AI framework aimed at business customers. The principles listed are nothing new, and many organisations already apply them; the AI space is no different, says Google. Either way, it’s a good reminder to reflect on whether what we do is helping us towards a safer future. Elsewhere, Google has developed third-party integrations within its Docs suite (part of the “smart canvas” work) for both business and private users; The Verge has taken a peek at the UX. The search giant also claims its AI chatbot, Bard, has improved its coding and maths capabilities. Some staff have labelled the hallucinating algorithm useless, but that won’t affect its march forward. This just makes me wonder when we get to the point where we trust our AI companions to write themselves new functionality and validate it. A fun experiment when conducted in a lab environment, but a tad scary if done live with access to the code repos and the ability to commit.
With so much anticipation around ESG regulation and the need for more transparency on associated topics, I wonder how the foundation model providers are doing. You can measure only when you know what matters and collect the relevant data. Google, Microsoft, Amazon, Oracle, IBM and others are actively putting effort into measuring their impact and acting on it. I like Microsoft’s report, as it’s built on their Sustainability Manager. What bothers me a little is that smaller foundation model providers do not focus on such resource drags as measuring and reporting their impact – or if they do, the data is well hidden on their websites. From speaking to customers and tech providers, I sense urgency among larger organisations to start collecting and reporting on their performance and environmental impact. They are starting to understand its value, which is not just compliance but also improved trust and business performance.
Intel has entered the quantum computing race, focusing on its current manufacturing capabilities and aiming to replicate its past success with silicon chips. Its competitors, however, are doubtful of the chosen path and are sticking to theirs. The more approaches, the merrier – a single solution is never the best choice at the beginning, and it doesn’t support innovation. Commercial mass availability is 5+ years away, but it will take us a huge step closer to AGI. CNET covers the story here.
If you believe that Meta has been lagging behind in the AI race, think again. Better yet, listen to Lex Fridman’s interview with Mark Zuckerberg and then look at the recent announcement of its text-to-voice technology, Voicebox AI. They are moving extremely fast – so fast that the model won’t be released to the public for fear of misuse. At the same time, Meta has published an overview, in the form of 22 system cards, detailing how Facebook and Instagram serve content to the user. The Verge provides an overview.
Patterns are everywhere, and so are the frameworks that make our professional lives easier. Christopher Alexander first described them in the 1970s for architecture, and software developers quickly saw the value. Here’s a brief recap of the design patterns.
When investors were piling funding into crypto, cautious voices asked about the value proposition (and so did I). Comparing that with the money funnelled into the AI and LLM race, everyone again wants a seat on the boat, to be the first to invest. Inflection AI is the next darling, with no problem raising as much as they deem necessary to develop their personal AI, Pi. In the words of Mustafa Suleyman (who used to run DeepMind): “… it’s a historic growth opportunity. Inflection’s best bet is to ‘blitz-scale’ and raise funding voraciously to grow as fast as possible, risks be damned.”
With English deemed the most popular programming language, I wonder when support will be extended to other major languages. It’s not easy, but the ability to interact with and direct AI in my native language would enable immense growth opportunities for many. I recognise it could leave more people behind, and progress would probably happen in waves – tinkerers and small entrepreneurs discover something new and start using it, and then it gets turned into a platform service, available to all at a fixed cost.
Where big tech is more careful and places guardrails around its AI-powered tools and platforms, small developers may choose not to. Lack of resources and pressure to avoid reputation-damaging hiccups often drive their decisions on how the tools behave. A NY Times article covers this topic. Some creators also cite personal responsibility and a preference for getting an answer without arguing with the AI tool. If we accept the view of Eric Hartford, creator of WizardLM-Uncensored, we’ll just build more echo chambers and division between groups. “Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves their model. Open source is about letting people choose.” I can’t agree with the suggestion that generating fake news is OK but distributing it isn’t. If we chose not to distribute the content, it would not be generated – there would simply be no commercial value in it. Using such tools for education would be fine; however, that’s not their creators’ goal.
I touched on the Bing AI integration a few weeks ago. Now Microsoft Bing will build you a shopping guide when you ask it! That’s pretty amazing, and I’d be very interested in its impact on the review-and-guide cottage industry. While many reviews are already auto-generated low-quality gibberish, others are very good – notably the ones behind a paywall. Now, how will sites like Rtings and other gear reviewers use chatbot capabilities to make their reviews so good that people are willing to pay for them?
And the last item covers rental and tenant assessment software, widely used in the US. According to The Lever’s article, it’s as biased a black box as many other AI-powered solutions. With more reliance on data and trust in its quality, more groups are likely to be treated unfairly. Yet I don’t think the progress can or should be reversed – the software developers need to reduce the biases in their tools. The users, who need to make decisions quickly, will often look for a single score and then move on to the next case.
When consulting, I am often asked about IT demand: how to deal with it and how to decide what to take forward. I’ve covered the topic briefly in the strategy section before. Let’s unpack the topic a little.
How IT demand is managed will depend on the established IT or delivery structure, the relationship with the rest of the organisation and the maturity of the processes. In short: what we do and how we do it.
It is useful to bear in mind what business relationship management does in an ITSM and value management context: identify, stimulate and shape strategic IT demand. It sets the boundaries for how IT solutions are requested and delivered. People in this role work across the organisation, bridging business demand with IT delivery capabilities and tracking the value delivered in the process.
I’ve worked in and with organisations with clear processes for demand intake, triage, prioritisation, assessment, handover to delivery and later analysis of the value produced. I’ve also worked for and with organisations that have no formal arrangements in place, where demand is dealt with on an ad-hoc basis. The latter approach may work for small organisations with low overall IT maturity. When the organisation and its teams grow, however, there’s a need to put some frames in place. The easiest is to follow ITIL/ITSM guidance and establish the necessary roles and procedures.
I like to use HS2 as an example, where we had no formal way of managing demand for IT. I’m generalising on purpose – there should be a single channel for the (internal) customers to interact with the provider (your IT unit). It doesn’t matter what type of IT product or service is requested; the service management tool or IT process should make it easy for the customer. We were tasked with establishing a process and sub-processes, the necessary gates and so on, to ensure that:
internal customers can and will go through a single route for requesting new IT
there’s a framework for classifying demand along with investment requirements, reviews and steering groups
requests for new IT are tied to budgetary process
technology debt is effectively managed and…
value is measured
My team’s focus was on strategic demand. By interacting with the customers, however, it became clear that they may not know what they need and may be unable to understand the implications of new IT solutions. We also noted that our colleagues tended to describe a desired solution, not the problem. To counter that, we borrowed a note from the UK Department for Transport’s IT:
“Describe a problem, not a solution.”
After some deliberation and consulting the service management team, we decided on the following steps:
define the process, roles and routines.
The process was initially very simple – capture the demand, assess it and contact the requester to understand more.
Roles were defined as requester, assessor/analyst (IT Business Relationship Manager/Business Analyst), approver (the budget holder and service provider) and delivery team (IT service management or project delivery).
establish a service request form in ServiceNow to capture all new demand. The form was dynamic, based on the customer’s choices, and sifted standard SRs to the service desk and the rest into the new IT demand list (a minimal sketch of this routing logic follows the list below).
set up weekly demand review meetings for initial triage, demand correction and future customer education. Here we agreed who would be part of the initial assessment and the expected outcomes.
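To make the intake sift concrete, here’s a minimal sketch of the routing the form performed. The catalogue entries and queue names are hypothetical illustrations; in ServiceNow this is configured declaratively rather than coded.

```python
# Hypothetical sketch of the intake sift: standard service requests go
# straight to the service desk queue, everything else lands on the new
# IT demand list for the weekly triage. Categories are illustrative.
STANDARD_CATALOGUE = {"password reset", "new laptop", "software install"}

def route_request(category: str) -> str:
    if category.lower() in STANDARD_CATALOGUE:
        return "service_desk"    # fulfilled via the standard SR process
    return "new_it_demand"       # queued for the weekly demand review

assert route_request("New laptop") == "service_desk"
assert route_request("CRM replacement") == "new_it_demand"
```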
With this approach we were able to capture 95% of new IT demand. By collating all the requests already shared with various parts of IT, we managed to trim a list of 400 down to circa 120 (many were repeat requests or substitutes for existing services). Using ServiceNow’s routing and custom dynamic forms capabilities, we produced an intelligent ‘interaction’ with the customer: they felt they were not being asked to describe the same thing over again, just to clarify details.
Once the standard ITSM process was in place, we could focus on strategic demand. For that we used a set of questions that formed the basis for the conversation. An example:
What is needed?
Why is it needed?
Who needs it?
Assumed cost and who pays for it?
What gets better?
When is it needed by?
Risks / opportunities?
Link to corp programme?
Other info
This was the initial assessment form, and we deliberately chose not to add more information than needed for triage – will it go ahead or get rejected? That meant no more than one or two sentences per box (the sketch below shows one way to capture the form as a structure).
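For illustration, the nine questions translate naturally into a small data structure. Field names are my paraphrases of the questions above, not the actual form fields we used.

```python
# Hypothetical representation of the initial assessment form; each
# field paraphrases one of the nine triage questions in the post.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DemandTriageForm:
    what: str                       # What is needed?
    why: str                        # Why is it needed?
    who: str                        # Who needs it?
    assumed_cost: str               # Assumed cost and who pays for it?
    benefit: str                    # What gets better?
    needed_by: str                  # When is it needed by?
    risks_opportunities: str        # Risks / opportunities?
    corp_programme_link: Optional[str] = None  # Link to corp programme?
    other_info: Optional[str] = None           # Other info
```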
As my team’s focus was on projects, we established a framework for assessing financial and work-time impact. For this we set out the following criteria. Please note that not all may apply.
| Size | Cost | Impact | Delivery time |
|---|---|---|---|
| Micro | Up to €50k | Single directorate / unit | Up to 1 month |
| Small | €50k to €250k | Single directorate / unit | Up to 3 months |
| Medium | €250k to €500k | Multiple directorates / partners | Up to 6 months |
| Large | €500k to €1m | Org-wide / partners | Up to 1 year |
| Very large | €1m and above | Org-wide / partners | More than 1 year |
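In code, applying the criteria could look like the sketch below. The post doesn’t say how mixed signals were resolved, so treating the largest applicable band as decisive is my assumption, and the impact criterion is left out for brevity.

```python
# Hypothetical sizing helper: each criterion maps to a band
# (0 = micro … 4 = very large); the overall size is the largest band.
SIZES = ["micro", "small", "medium", "large", "very large"]
COST_BANDS = [50_000, 250_000, 500_000, 1_000_000]  # EUR upper bounds
TIME_BANDS = [1, 3, 6, 12]                           # months upper bounds

def band(value: float, bounds: list) -> int:
    for i, upper in enumerate(bounds):
        if value <= upper:
            return i
    return len(bounds)  # above all bounds -> very large

def size_initiative(cost_eur: float, months: float) -> str:
    return SIZES[max(band(cost_eur, COST_BANDS), band(months, TIME_BANDS))]

print(size_initiative(cost_eur=300_000, months=2))  # medium (cost dominates)
```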
We anticipated demand to arrive later in the year and thus encouraged our colleagues to engage early and invite us into their thought process. This approach served multiple purposes:
it enabled IT to have an early signal of business planning and to conduct internal assessments early, without commitment to delivery
it helped to build trust between people and to build up business expertise within the IT
it allowed IT to promote existing services and plan necessary changes to those.
The last element we were asked to deliver was the hardest: understand the anticipated value and be able to measure it. Working with colleagues from the value management team, we included a set of goals for every project and measures to track their impact. Each project had to deliver at least one of these:
improve customer experience
improve business processes
streamline processes
improve data quality
meet regulatory demand
reduce duplication
reduce technology debt
For example, an initiative to introduce a CRM solution helped to improve customer experience, business processes and data quality, while reducing technology debt and meeting regulatory demand. In this project we transitioned from an in-house bespoke system to cloud-based Dynamics CRM, applied data protection policies, designed easy routes for customers to engage with the team and trained people to use the system. We finally had a single source of truth and were able to respond effectively to FOIs and data requests. All of these were assumed benefits, with values to track against once in operation.
I recognise this blog post covered more than just demand management, but the function is quite broad, and for it to be valuable, not just a drag on people’s time, it needs to understand both business and IT, and be engaged with both throughout the demand process.
As anyone with a mild interest in the topic has noticed, there’s been growing concern over our relationship with AI systems. The fear is that these systems, when left to their own devices (pun intended), will at some point in the not-too-distant future see us humans as obsolete and concoct a cocktail of measures to stop anyone or anything with the potential to derail it. The ‘it’ being an artificial general intelligence (AGI) system with the capacity to learn rapidly, and to design and deploy tools to execute its wishes. Sounds scary? Perhaps. Too sci-fi? Definitely not.
The following is a reflection on the content I’ve consumed over the past weeks:
Industry insiders are potentially terrified of, yet over-excited by, their creations.
There are multiple open letters calling for a temporary pause on further training of models with capabilities at or above GPT-4. Some were late to the game; others want to help industry leaders justify taking the time needed for guardrails to be developed.
Education programmes are being adapted to the changing tech landscape to help with the surge in demand for skills our education system currently fails to cater for.
AI regulation. China, the US, the EU – all are taking different approaches to regulating AI, with the US trailing behind. Brussels has sprinted ahead of the US and is designing a broad, risk-based approach; many others look on in envy. The rivalry between the US and China may become a stumbling block, as the fear of losing out drives the US capitalist model forward, and maybe not towards the best outcomes. I can’t fully agree with Foreign Policy when it writes: “The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional ‘guardrails’ of technology management,” the authors wrote. “By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place.” The Western approach, as often happens with tech regulation, is divided between the EU, US and UK authorities, each keeping their own political-economic interests in mind.
…and finally, every major media outlet is covering the topic. A number of niche outlets and portals have also emerged, which is good, as their coverage is specific and the articles run longer than two-minute reads. I wonder whether this will scare people too far, leading to mass protests (fear of losing jobs and part of one’s identity), or whether the masses will remain oblivious to the oncoming changes, lost in doom-scrolling.
With so much to cover, I turned to ChatGPT for its view on AI risks. The output is concise and self-explanatory:
There are several risks associated with the AI race, some of which include:
Safety risks: As AI systems become more sophisticated, they could pose a threat to human safety. This could occur if the AI systems malfunction, are hacked, or are used for malicious purposes.
Job displacement: As AI systems become more capable, there is a risk that they could replace human workers in many jobs. This could lead to high levels of unemployment and economic disruption.
Ethical concerns: The development and deployment of AI systems raise a number of ethical concerns, including privacy violations, algorithmic bias, and the potential for AI systems to be used for surveillance or control.
Geopolitical tensions: There is a risk that the AI race could fuel geopolitical tensions between countries, as each country seeks to gain an edge in AI development and deployment.
Arms race: The development of AI systems for military applications could lead to an arms race, with countries competing to develop the most advanced and powerful AI weapons.
Unintended consequences: As AI systems become more advanced, there is a risk that they could cause unintended consequences that are difficult to predict. For example, an AI system that is designed to optimize a certain task could end up causing unintended harm or creating unforeseen problems.
Next prompt is obvious for a curious person – OK, how do we tackle these concerns?
Addressing the risks associated with the AI race will require a multi-faceted approach involving collaboration between different stakeholders, including governments, industry leaders, researchers, and civil society. Here are some strategies that could help tackle these concerns:
Regulation: Governments can play a key role in regulating the development and deployment of AI systems, setting standards for safety and ethical considerations. This could include establishing regulatory bodies to oversee AI development, as well as implementing laws and regulations to ensure accountability and transparency.
Ethical guidelines: The development of ethical guidelines and best practices for AI systems could help to mitigate the risk of unintended consequences and ethical concerns. This could involve the creation of industry-wide standards for data privacy, algorithmic bias, and transparency.
Research: Continued research into AI safety and ethics is essential to address the risks associated with AI. This could involve developing new techniques for verifying and validating AI systems, as well as exploring ways to make AI more transparent and explainable.
International collaboration: International collaboration and cooperation between governments, researchers, and industry leaders can help to mitigate the risk of geopolitical tensions and an arms race. This could involve the establishment of international agreements on the development and deployment of AI systems.
Education and awareness: Raising awareness about the potential risks and benefits of AI systems is essential to ensure that the public is informed about the implications of AI development. This could involve educating the public about the potential risks of AI, as well as promoting AI literacy among policymakers and industry leaders.
Overall, addressing the risks associated with the AI race will require a coordinated and collaborative effort from multiple stakeholders, involving a range of strategies including regulation, ethical guidelines, research, international collaboration, and education and awareness.
What should we make of it? I really like the views of Max Tegmark, covered by Lex Fridman in one of his long-form conversations. Max’s point is that what’s happening currently isn’t an arms race where the winner takes all, but a suicide race to the bottom where, at the end, everybody loses. Max draws an excellent parallel, comparing the current leading systems in the AI race to a cute and adorable baby that nobody wants to see as a risk to the incumbents (us). Lex’s podcast is one of the really good sources to aid thinking.
What will the near future look like?
This is a clear sign of which industries will be affected once AI is connected to the internet.
Skills and education. We absolutely need to focus our education system on the skills needed in the future. We mustn’t lull ourselves into believing the AI won’t stand up for its own interests (and we can’t be sure what those are). Teaching students how to manage AI systems, from core infrastructure to security to prompt engineering, is necessary. We can manage systems only when we understand how they operate. That’s harder with a learning system that adapts to changes in its environment (objects around it, users, conditions), and hence we need to focus on what the world of tomorrow looks like – and teach students to design it.
Regulation is being developed in the EU. I totally agree with the position of Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age: “On artificial intelligence, trust is a must, not a nice to have.” Meanwhile, the US begins to study possible rules to regulate AI. Whilst the EU likes to regulate everything, supposedly for a better future, the US doesn’t really want to do anything that might give others an edge over its technological prowess. Biden views the responsibility as lying squarely with the tech companies and self-regulation – not really a solid strategy when they all race to the bottom… China, on the other hand, has been at the front of the pack. In an article dating to 2022, Natlawreview covers Chinese regulatory activities: “China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies’ use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and ‘disseminate positive energy.’”
What about generative AI’s relationship to energy? Training the models can use huge amounts of energy to start with. On the other hand, AI systems can detect, highlight and correct inefficiencies in industrial and energy systems. Take it as an investment in the future.
And lastly, the compensation mechanisms for everyone else. As with any tectonic shift, there will be a small number of winners and many losers in the AI race. In my view, universal basic income (UBI) should be actively discussed in the parliaments of the most digitally advanced countries. It would be the key measure for tackling the potential job losses created by task automation. I recommend reading the opening position of the study released in August 2021. I wonder how much the positions of UBI opponents have changed over the past six months?
What can you do now?
Learn to understand how these systems impact you; think along; learn to identify auto-generated content, especially content that plays on our worst fears and hatreds, and report it to the authorities. Talk to your elected MP and ask for their and their party’s position, with an explanation of what they will do to tackle the areas GPT highlighted in response to my prompt above. Educate those around you who dismiss the risks as nonsense or simply take a ‘not-gonna-happen-to-us/in-our-lifetime’ approach. Consider that no industry will be untouched by the changes in the technology landscape; some changes will benefit us, others not so much.
One of the topics that keeps coming up at the social events I go to is what makes an effective IT strategy.
A short answer, given by a few friends and colleagues: if in doubt, look at the organisation’s strategy and align with it. You can always build on that as your IT capability matures and you take on more responsibilities. I’d argue it’s more important to start with solid vision and mission statements for your IT organisation. The relevant strategies (architecture, development, demand, delivery, operations, CSI et al.) can be built on those statements. Don’t forget that IT is mostly an enabling function: if it doesn’t deliver the basics well enough, it becomes irrelevant – either the organisation neglects it and seeks help elsewhere, or it goes out of business.
Some of the best IT strategy examples I’ve seen fit on a single page. There, the focus is on the following items:
core purpose of IT (kind of obvious but often lost in translation)
key capabilities and operating model (what IT does and doesn’t do)
core values (how the unit behaves)
These areas answer the basic questions: what IT is, why it exists and how it operates. I’ve run workshops in the past with the aim of defining an IT strategy. We started with the three points above, using language familiar to the organisation. As an example, it could read:
“The purpose of organisation X’s IT department is to ensure the IT systems and services consumed by the organisation meet its needs: designed with the user in mind, adequately provisioned, secure, available and resilient. The IT department does this by developing a mix of in-house and external capabilities. We partner with subject matter experts in the field to develop and support key business applications, and integrate those via API interfaces with supply chain and customer-facing resources.”
Or something similar that is relevant to your case.