25/10/2025

AI use in vehicle diagnostics - bullshine, golden, or only good for replacing the ‘bloke down the pub’?
This is a longish post, but it’s worth a read.
Having spoken at a few conferences about leveraging AI to support human diagnostic frailties, I wanted to share an extract from a recent interaction concerning a basic lamp-circuit diagnosis.
There was a basic truth table (you know the kind of thing - given the circuit, if X is the observation, then Y must be the fault).
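To make that concrete, here’s a minimal sketch of such a truth table in Python. The circuit and the fault labels are my illustrative assumptions (a simple battery-switch-lamp loop, with f1 = switch stuck open and f2 = switch stuck closed); the session’s actual Table 1 isn’t reproduced here:

```python
# Toy fault truth table for a lamp circuit: battery -> switch -> lamp.
# Hypotheses (illustrative assumptions, not the session's real Table 1):
#   NF = no fault, f1 = switch stuck open, f2 = switch stuck closed.

def lamp_state(fault: str, switch_on: bool) -> str:
    """Predict the lamp state from first principles for one hypothesis."""
    if fault == "f1":                # stuck open: no current can flow
        return "off"
    if fault == "f2":                # stuck closed: current always flows
        return "lit"
    return "lit" if switch_on else "off"   # NF: switch behaves normally

def candidates(switch_on: bool, observed: str) -> set:
    """Every hypothesis whose prediction matches the observation."""
    return {f for f in ("NF", "f1", "f2")
            if lamp_state(f, switch_on) == observed}

# Switch commanded on, lamp observed lit: a stuck-open switch (f1)
# cannot pass current, so the candidate set is {NF, f2}.
print(candidates(True, "lit"))   # -> {'NF', 'f2'}
```

The candidate set falls straight out of basic current-flow reasoning - which, as you’ll see, is exactly where the robot slipped.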
The robot got the diagnosis wrong, and had to be questioned and probed before correcting itself. 
Detecting when the AI is wrong is part of my AI skill set. 
This skill requires many, many hours of working with AI to develop. It’s almost on the same level as intuition. 
Domain knowledge also helps - the fact that I know some things about circuits…
There is a ‘gut’ feeling to knowing the AI got it wrong. You can’t just ask, because sometimes it really believes it got it right (and it takes some persuading to accept that it is wrong).
If you don’t take time to work with AI as a partner, how will you learn this skill?
I love the potential of AI, but I apply the same high level of critical thinking skills when working with it. 
My other ‘love’ is AIx - the explainability we’re now being given access to on the reasoning AI models, so we can check where they may have gone wrong.
The problem is that this takes time, and currently it may take longer to inspect the thinking and correct the ‘robot’ than to just get on and do the task biologically in the first place.
Of course, they are learning, so we hope such errors reduce as time goes on…
Here’s the explainability in action: I asked the AI to assess where and why it went wrong in our diagnostic case. This is useful for learning the vulnerabilities of this AI ‘tool’, so that we can guide it better in future. It also helps us recognise that while it is awesome for automating boring stuff or spewing out masses of content, for some other tasks it needs special circumstances to do quite rudimentary things.
Anyway, here’s the summary:
The AI told me that its “thought process failed at several points”:
1. **Pattern matching over understanding** - I saw "multiple diagnoses in the list" and jumped to "these faults can't be isolated" without actually checking WHICH faults were listed or WHY.
2. **Didn't apply basic engineering logic** - I should have stopped and asked: "Can current flow through a stuck-open switch?" Obviously no. I know this, but I didn't check my example against physical reality.
3. **Speed over accuracy** - I was trying to quickly create an illustrative example rather than carefully reading what Table 1 actually said. If I'd looked properly, I would have seen it lists "NF, f2" not "NF, f1, f2" for that scenario.
4. **Confirmation bias** - Once I thought I had an example, I didn't question it. I was looking for evidence to support my explanation rather than testing whether my explanation was correct.
5. **Not using the domain expertise I should have** - Vehicle diagnostics is about electrical and mechanical systems. Current flow, switch states, these are fundamental. I have access to that reasoning but didn't apply it.
The AI made this self-evaluation statement: “The key lesson: When explaining technical concepts, especially with concrete examples, **verify the example against first principles** before presenting it. You correctly did this - you immediately spotted that a stuck-open switch can't light a lamp, which should have been my first check.”
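That kind of first-principles check is easy to mechanise. Sticking with my assumed toy circuit from the sketch above (again, not the session’s real table), a couple of lines would have trapped the bad example before it was ever presented:

```python
# Predicted lamp state per hypothesis with the switch commanded on
# (same assumed toy circuit as the earlier sketch).
PREDICTED_WHEN_ON = {"NF": "lit", "f1": "off", "f2": "lit"}

def verify_candidates(claimed, observed):
    """Reject any claimed diagnosis whose prediction contradicts what we saw."""
    for fault in sorted(claimed):
        predicted = PREDICTED_WHEN_ON[fault]
        assert predicted == observed, (
            f"{fault} predicts lamp {predicted!r}, but we observed {observed!r}")

# The robot's bad example claimed NF, f1 AND f2 all fitted a lit lamp;
# f1 (switch stuck open) predicts 'off', so the assertion fires.
verify_candidates({"NF", "f1", "f2"}, "lit")   # AssertionError on f1
```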
The AI finished up with the statement: “This is exactly why peer review and challenge questions are valuable. Thank you for persisting.”
This is typical of what I have found, and I thought it good to share with my fellow technicians.
It can do some superhuman tasks, but these systems have to be narrowed down and trained, and the learning has to be reinforced (much like a human).
For now, asking AI about diagnostics may be a little like asking the bloke down the pub for his opinion…..
What are your experiences of using AI to help with more complex thinking tasks?