How do chatbots work and how do we teach them to reason?

The Decoder, an AI-focused news outlet, publishes a weekly dose of good stories. One of them covers “Why GPT-4 learns how we think?”. As I read it, it became clear that the headline should say how, rather than why! The article suggests that LLMs absorb a lot of folk psychology from their training data, e.g. literature, essays, and opinion pieces. Well, that’s quite fascinating! Wired also published an excellent article on the basics of LLMs and how they function.

To test the capabilities of the davinci model, I ran a little experiment: I described a hypothetical situation and asked ChatGPT to voice its opinion. The exchange is below; a rough sketch of how such a call could be made through the OpenAI API follows the transcript.

Prompt: Tom has a successful software development company that he expands every year through M&A with up to two companies. He's looking to ease his workload and decided to hire a junior M&A analyst at the median market rate. He interviewed a few candidates. One candidate asked fundamental questions about business strategy, the value he is expected to deliver, and the success metrics of the role. After that interview, Tom decided to pause hiring for the role. Why do you think he did that?

Based on the information provided, it appears that the candidate who asked fundamental questions about business strategy, the value they are expected to deliver, and success metrics of the role showed a higher level of critical thinking and strategic insight than the other candidates. This may have led Tom to realize that he needs to revisit the job description and the overall strategy of his company’s M&A activities before hiring someone for this role.

Image: Lexica prompt “chatbot providing legal advice and opinion on management”

Alternatively, Tom may have realized that the candidate’s questions revealed a misalignment between his expectations and the role he was looking to fill. In that case, he may need to reconsider the responsibilities and expectations for the M&A analyst role or explore other solutions for easing his workload.

Prompt: Tom doesn't believe in setting clear goals and strict expectations for the role. Would that change your opinion?

If Tom doesn’t believe in setting clear goals and strict expectations for the M&A analyst role, it might explain why he paused hiring after the candidate asked fundamental questions about business strategy, the value they are expected to deliver, and success metrics of the role. Tom may have realized that he needs to define clearer objectives and expectations for the role and that the candidate’s questions highlighted this need.

However, if Tom still doesn’t see the value in setting clear goals and expectations for the M&A analyst role, it could be a cause for concern. Without clear objectives and success metrics, it would be challenging for the M&A analyst to understand what they need to achieve and how their work will contribute to the company’s overall growth. This lack of clarity could lead to confusion, misaligned expectations, and a less productive work environment.

Therefore, if Tom doesn’t believe in setting clear goals and expectations for the M&A analyst role, it may be challenging to find a candidate who can effectively contribute to the company’s growth strategy without that guidance. In that case, it may be worthwhile for Tom to reconsider his approach to goal-setting and expectations, or explore other ways of managing his workload.
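
For anyone who wants to reproduce the experiment programmatically rather than through the ChatGPT interface, here is a minimal sketch. It assumes the legacy openai Python package (v0.x) and an OPENAI_API_KEY environment variable; text-davinci-003 is my stand-in for the davinci-family model mentioned above, and the sampling parameters are illustrative, not the exact settings behind the transcript.

```python
# Minimal sketch, assuming the legacy openai Python package (v0.x)
# and an OPENAI_API_KEY environment variable. text-davinci-003 is an
# assumption for "the davinci model"; parameters are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Tom has a successful software development company that he expands "
    "every year through M&A with up to two companies. He's looking to "
    "ease his workload and decided to hire a junior M&A analyst at the "
    "median market rate. He interviewed a few candidates. One candidate "
    "asked fundamental questions about business strategy, the value he "
    "is expected to deliver, and the success metrics of the role. After "
    "that interview, Tom decided to pause hiring for the role. "
    "Why do you think he did that?"
)

response = openai.Completion.create(
    model="text-davinci-003",  # davinci-family completion model
    prompt=prompt,
    max_tokens=300,            # room for a few paragraphs of reasoning
    temperature=0.7,           # some randomness, so answers vary between runs
)

print(response["choices"][0]["text"].strip())
```

Note that the completions endpoint keeps no conversation state, so a follow-up like the second prompt above would simply be appended to the text together with the first answer; the newer chat endpoint handles multi-turn exchanges natively.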

When you see such capability, what are your thoughts? How about the legal industry, which has so many words on digital paper? Wisebot AI is a starting point, but when do we reach the point where, with all legal cases publicly available, a chatbot can provide credible legal advice and the reasoning behind its opinions?
