MHR Labs: Spotlighting the latest AI trends
Our Research team break down the latest developments in AI that we saw in 2025, and look ahead to what we can expect in 2026.
Looking back and looking ahead
As we begin 2026 (I know, it’s February already) and start looking at what’s new in the world of tech, it can be helpful to revisit what happened last year to gain some perspective. There’s a large number of summaries out there covering the news and milestones from 2025. This one from Ars Technica is one of my favourites, as it takes a more grounded look at what actually happened across the year.
At this point, I’m largely desensitised to how quickly things change in the AI news cycle. Reading through articles like this, though, what strikes me most is just how much I’ve forgotten. Some of that is fads coming and going; other parts are systems that have already been surpassed.
Last January, the release of DeepSeek R1, an open-source Chinese reasoning model, completely upset the status quo when it was revealed it could match the performance of the best proprietary models of the time, all while being trained for a fraction of the price. It caught the big AI companies off-guard and left them scrambling to maintain their lead. Now it’s just one of many models, all on similar levels and each a valid choice for building systems. It really drives home for me how AI is developing at a truly astounding pace.
AI predictions
Alongside the look-backs, the end of a year also brings another type of article: predictions for what comes next. I like this set from Foundation Capital in particular. Their predictions for 2025 turned out to be fairly accurate, and the new ones feel grounded in reality: not too obvious, and not buying into absurd hype.
One idea from this that I found particularly interesting is the use of AI agents to track how decisions are made as part of a wider process. Rather than just producing an outcome, the idea is that agents could help capture the steps and reasoning used along the way. To me, this feels exciting because it could make complex decisions easier to understand after the fact and help surface useful knowledge that would otherwise stay buried in messy systems and workflows. In the long term, this could shift organisations away from static processes toward systems that actively and automatically learn from their own decisions.
Research perspective
As noted in the Ars Technica article, 2025 was a year of incremental improvements for AI, and I generally feel that 2026 will bring more of the same steady progress. AI agents are currently making huge impacts in the world of coding, and I think this impact will spread to other domains as people figure out how best to integrate them into their workflows. There was a lot of discussion last year around being in an AI bubble, and while I feel that something has to give eventually (although I am definitely not an economist), AI will continue to be a useful tool. It’s here to stay.
- Chris Judd, Senior Data Scientist
A new trend in AI healthcare
In many ways there is a strong overlap in the technology requirements between health and HR. Both deal with messy, erratic human data and both deal with very sensitive data where an accidental (or intentional) leak of the data could be disastrous for both the individual and company concerned.
Software monitoring health, especially from a wellbeing perspective, has been around for a while now, but with the development of powerful new large language models devoted specifically to medical specialisms, we could be entering an entirely new era of software.
OpenAI and Anthropic have both recently revealed early-adopter versions of healthcare AI models: ChatGPT Health and Claude for Healthcare respectively. ChatGPT Health is aimed at the consumer level (i.e. the patient), whereas Claude for Healthcare targets healthcare professionals. Both are currently aimed at the non-European market, with Claude specifically incorporating insurance-based data, presumably for use with the American market in particular.
Both systems are specifically trained models that incorporate medical records and device data, along with assurances around data usage, sharing and training: you choose what data you want to share, and they don’t train on or share that data.
ChatGPT Health
The new OpenAI model is intended for use by the individual as a way to support the care they receive from medical professionals. It will help them interpret medical jargon, analyse their wearable data and, where applicable, help navigate medical insurance complexities. It is available in most non-European countries (presumably those countries without GDPR), although there is a waiting list you have to sign up to at the moment.
Claude for Healthcare
As mentioned above, the Claude model is currently targeted at the healthcare industry itself, and you can see in the presentation video how wide-ranging they intend this to be. It’s a long video, but if you skip to the 29-minute mark it gives an example use case of a doctor summarising medical notes. The key area, though, is that it will link up lots of personal healthcare data from sources such as device data, insurance data and company health data. This appears to be targeted as much at reducing administration and bureaucracy as at being a diagnostic tool, although they did mention in the video using it as a tool of “second opinion”.
In the video, they highlight that Anthropic are investing a great deal in this area compared to others. It will be interesting to see how successful they are: Anthropic are currently seen as the leader in coding tools, even though their exposure to the general public isn’t as great as that of OpenAI’s ChatGPT or Google’s Gemini models.
Research perspective
As with most things AI, I find the research exciting and terrifying in equal measure. It’s easy to see the benefit, especially from a diagnosis and bureaucracy-reduction perspective. Doctors are massively overworked, and a tool that gives them more opportunity to focus on their patients can only be a good thing. Not only would doctors have more quality time to spend with patients, but the AI could act as an early triage system, further reducing the burden on the healthcare system, with associated cost savings and an improved appointment system.
But on the other hand, especially when insurance companies get involved, you can see the slippery slope to persecution and healthcare denial. If an individual refuses to share their data when joining a company’s insurance scheme, will they be barred from accessing healthcare? What’s the guarantee that the most sensitive of data sets won’t find their way into nefarious hands? If a doctor or patient follows an incorrect diagnosis, then who is responsible? And with drug companies having a vested interest in making money through this system, why would it be in their interest to share data that deters patients from using their prescriptions?
In one of the original (American) articles reporting the new tools, it was suggested that GDPR could be a hindrance to European systems adopting and benefiting from this technology. From my perspective, as much as I see the advantages of the tools, I welcome the protection the regulations give us. Coupled with the public healthcare systems most European countries are privileged to access, I think in the long run this technology could be more beneficial to our systems than to insurance-based ones.
What does it mean for MHR?
As already discussed, software monitoring wellbeing has been around in various forms for a while now, but this feels like a step up with the amalgamation of personal, medical and device data, wrapped in the latest LLM magic. It feels like we’ve stepped far beyond what has been attempted before.
Both People First and iTrent have integrated with lots of different systems over the years, be it for CV parsing, calendar interactions or document storage. Healthcare systems could be the next big thing to connect with. What if the next time you book a sickness absence, not only does it inform your manager and HR, but also offers to book a doctor's appointment or suggests from your medical history that you should really start to get a seasonal flu jab. A few weeks after you’ve recovered it then reminds you of this and offers to book you in for one!
Something else these systems demonstrate, beyond the obvious impact (negative or positive) on employee health, is that nothing is off the table when it comes to AI tools. It shows we can’t underestimate the level of complexity that can be thrown at an LLM and it shows we’re ripping up any traditional mechanisms for how we interact with systems. But more than anything, it shows that individuals have increasing power to build and work with systems in ways that just weren’t feasible only a couple of years ago. The next article highlights this really well...
- Neil Stenton, Research Engineering Manager
LLMs in personal projects
Continuing the theme of health, this blog post gives an example of using LLMs to work with personal health data. The author initially describes his ongoing thyroid disease, which intermittently causes him severe health problems. He wondered if there were some signal he could monitor that could predict the onset of these problems.
With a great deal of assistance from Claude (an LLM from Anthropic), the author uses his own Apple Watch data to build a machine-learning model to track his symptoms. Together they iterated on the problem, with both the author and the AI suggesting potential avenues to explore as they worked towards a final solution. In the end they created a system that can predict when his symptoms are likely to worsen, giving him an early warning he can act on, with the potential to dramatically improve his quality of life.
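To make the idea concrete, here is a minimal sketch of the general technique: comparing a wearable metric against a personal rolling baseline and flagging days that drift too far from it. The metric (resting heart rate), window size and threshold are all illustrative assumptions on my part, not the author’s actual pipeline, which was considerably more sophisticated.

```python
# Hypothetical sketch: flag days where a wearable metric (here, resting
# heart rate) deviates sharply from a personal rolling baseline.
# All numbers and field choices are illustrative, not the author's model.
from statistics import mean, stdev

def flag_anomalies(resting_hr, window=7, threshold=2.0):
    """Return indices of days whose reading deviates more than
    `threshold` standard deviations from the trailing `window` days."""
    flags = []
    for i in range(window, len(resting_hr)):
        baseline = resting_hr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(resting_hr[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Example: a stable week followed by a sudden elevation on day 7.
hr = [58, 59, 57, 58, 60, 59, 58, 72]
print(flag_anomalies(hr))  # [7] - the final day stands out
```

The appeal of something this small is that it runs entirely on your own machine against your own exported data, which is exactly the contrast with centralised platforms drawn below.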
This post is a nice contrast to the discussions above. Instead of handing data over to a third party, this shows it is possible to use today’s tools to work directly with your own data, sidestepping many of the privacy concerns that come with large, centralised health platforms.
Research perspective
What I really liked about this post is how it shows the potential of LLMs to help individuals help themselves on problems that directly affect them. Unlike in most examples, the AI here isn’t doing everything independently. It’s a partner you can bounce ideas off, helping combine your own knowledge and experience with the technical expertise the LLM brings.
It really brings home to me the power of LLMs as force-multipliers for individuals. Imagine a world where instead of generic tools optimised for the average user, people could build their own small, highly personal systems to meet their specific needs. These solutions would be more relevant, more adaptable, and actually useful in day-to-day life, because they’re built by the people who understand the problem best. This is the future AI companies have confidently been claiming will happen when agents fully take off, and if it does play out as they say, it could fundamentally change how people work, reason about problems, and apply technology in their own lives.
There are many problems I can see with such a world, however. In the article, the author acknowledges that he already has some experience with machine learning, which likely helped him both attempt this in the first place and recognise when the AI was heading in the wrong direction. From our own experimentation, we’ve seen how often LLMs are confidently wrong, something that gets much harder to spot when playing around in areas we are less familiar with. Without a solid understanding, it’s easy to end up trusting outputs that look good, but rest on shaky assumptions. The risk isn’t just failed experiments, but a growing number of personal tools that feel insightful while quietly reinforcing incorrect or even harmful conclusions.