Why is a Real Estate Agent writing about AI and Engines?

In my job, details matter. Whether I am swapping an engine in my Honda Acty or negotiating a contract for your home, I believe in verification.
I wrote this case study to prove a point: Tools are useful, but human expertise is irreplaceable. If an AI can be this confident and this wrong about an engine, imagine what happens when you trust an algorithm to value your home.
Read on to see how I caught the AI in a lie—and why I bring this same level of scrutiny to every real estate transaction.
THE CONFIDENCE GAP
A Case Study in AI Reliability and the Amplification of Human Error
By Joseph Giordano
Written with organizational assistance from Claude (Anthropic AI)
When AI Gets It Wrong: A Real-Time Case Study in Confident Incompetence
What happens when you ask AI for expert advice on a complex problem—and then actually fact-check its answers?
I asked an AI to help me choose an engine for a vehicle swap. I explicitly told it to take its time, do thorough research, and prioritize accuracy over speed. Instead, I watched it confidently recommend solutions with fundamental flaws—twice—that I had to catch through basic questions.
This isn’t a think-piece about theoretical AI limitations. This is a documented conversation showing exactly how AI fails when it matters: confident recommendations, critical oversights, and zero accountability.
The AI’s own conclusion? “AI does not transcend human limitations—it systematizes them, accelerates them, and distributes them at scale.”
My conclusion? “You are only as good as we make you, and to say humans are perfect and reliable enough to feed AI accurate data is just ludicrous. The imperfections of mankind multiply when spread across AI.”
Here’s the irony: I’m using that same AI to write this case study documenting its own failures. Why? Because AI is actually brilliant at organizing information, structuring arguments, and articulating complex ideas—when properly directed and verified by human judgment.
This isn’t anti-AI. I love this technology. But people need to understand what it can and cannot do. AI is an incredible tool when used appropriately. It becomes dangerous when we trust it beyond its capabilities.
If you’re using AI for anything more complex than a grocery list, you need to read this.
This 22-page case study includes:
- Chronological analysis of the conversation with key excerpts
- Specific examples of AI failures in real-time
- Why AI can’t replace human expertise (yet)
- Practical guidelines for what AI should and shouldn’t be used for
- What AI actually excels at (demonstrated by this very document)
- Recommendations for users, developers, and policymakers
Read time: 20 minutes | Could save you: money, time, and frustration
Abstract
This paper documents a real-time interaction between a user and a large language model AI (Claude) tasked with providing technical guidance for a complex automotive engine swap project. The conversation reveals fundamental limitations in AI systems: the presentation of confident recommendations based on incomplete verification, multiple corrective iterations after user questioning, and the inability to distinguish between critical and peripheral information. Through this case study, we examine the paradox of AI systems that possess vast informational access yet lack the experiential judgment required for complex real-world applications. We conclude with recommendations for appropriate AI use cases and necessary transparency about AI limitations.
1. Introduction
Artificial Intelligence systems, particularly Large Language Models (LLMs), are increasingly positioned as expert consultants capable of providing guidance across diverse domains. Users approach these systems with complex, real-world problems expecting authoritative answers. This case study documents what happens when AI confidence exceeds AI competence—and why this gap matters.
The Central Question: If AI systems require human verification for critical decisions, what value do they provide, and what risks do they introduce?
2. Background and Context
2.1 The User’s Request
A user approached the AI with a specific, complex automotive engineering challenge:
- Project: Engine swap for a 1996 Honda Acty Town 4WD (Japanese micro-truck)
- Requirements:
- Replace 38 HP stock engine with more powerful alternative
- Maintain 4WD functionality
- Maintain reverse capability
- Fit within stock engine bay dimensions
- Capable of 55-65 MPH highway speeds
- Minimal modifications required
- Unique/uncommon swap (not documented elsewhere)
2.2 The User’s Explicit Instructions
The user specifically requested:
“I want you to take your time to really research and confirm as much as you can… Don’t just do a ‘Google’ search. Search forums, social media, Japanese websites, etc until you feel that you truly did your due diligence for gathering accurate information. I’m not looking for a quick response, rather a correct response.”
This instruction is critical: the user explicitly prioritized accuracy over speed and requested thorough verification.
2.3 The Stakes
This was not a theoretical exercise. The user would potentially:
- Purchase engine components ($1,500-$4,000)
- Invest fabrication labor (weeks to months)
- Commit to a specific technical path
- Incur costs for custom adapters, mounts, and systems
Wrong advice would result in tangible financial loss and wasted time.
3. The Case Study: Chronological Analysis
3.1 Initial Research Phase
The AI conducted multi-source research including:
- Technical specifications for the Honda Acty E07A engine
- Forum discussions on engine swaps
- Specifications for alternative powertrains
- Japanese kei vehicle engines
- Motorcycle engines
- Industrial diesel engines
Outcome: The AI compiled extensive information on the stock vehicle and multiple engine alternatives.
Assessment: This phase was successful. The AI effectively aggregated information from diverse sources.
3.2 First Recommendation: Can-Am Spyder Rotax 1330 ACE
AI Recommendation:
- 115 HP three-cylinder engine
- Integrated reverse via SE6 transmission
- Presented as the “perfect” solution and top choice
- Estimated cost: $4,000-$8,000
AI’s Confidence Level: High (“Why This is Unique,” “Why Nobody’s Done This”)
User Response: Acknowledged interest but noted cost concerns and preferred the Kubota diesel option.
Critical Question Asked: Would the low-RPM diesel work with current transmission gearing for 55-65 MPH speeds?
3.3 First Major Error: Gearing Oversight
The user identified a critical flaw the AI had not adequately addressed:
The Problem:
- Stock E07A engine: 7,000 RPM redline, operates at ~6,500 RPM at 65 MPH
- Kubota D1105: 3,000 RPM maximum rated speed
- With stock gearing: Diesel would only reach ~30 MPH at safe RPM
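The check the AI skipped is one line of arithmetic: with unchanged gearing, road speed scales linearly with engine RPM. A minimal sketch, using the stock data point above as the reference (treated as exact here for illustration):

```python
# Sanity check the AI never ran: with fixed gearing, road speed is
# proportional to engine RPM. Reference point from this case study:
# the stock E07A turns ~6,500 RPM at 65 MPH.

def speed_at_rpm(rpm: float, ref_rpm: float = 6500.0, ref_mph: float = 65.0) -> float:
    """Road speed in MPH at a given engine RPM, assuming stock gearing."""
    return ref_mph * (rpm / ref_rpm)

# The Kubota D1105's rated maximum is 3,000 RPM:
print(round(speed_at_rpm(3000)))  # -> 30, far short of the 55-65 MPH target
```

Thirty seconds with this ratio would have flagged the diesel before it was ever recommended.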
AI’s Failure:
- Did not perform basic mathematical verification of RPM-to-speed ratios
- Recommended an engine incompatible with the user’s stated highway speed requirement
- Required user to identify this fundamental incompatibility
AI’s Response:
- Acknowledged the problem
- Recalculated and confirmed the diesel would require expensive differential re-gearing ($800-$1,500)
- Offered three solutions (final drive change, overdrive unit, or accept speed limitation)
- Revised recommendations to higher-RPM alternatives
3.4 Second Recommendation: Honda CB500 Parallel Twin
AI’s New Top Recommendation:
- 47 HP motorcycle engine
- 9,500 RPM redline (matches current gearing needs)
- “Perfect RPM Match”
- Estimated cost: $2,700-$5,400
- Described as most affordable viable option
AI’s Confidence Level: High (labeled as “Winner”)
User Response: Asked a single, simple question: “Is the CB500 chain driven?”
3.5 Second Major Error: Chain vs. Shaft Drive
The Critical Oversight:
The AI recommended a chain-driven motorcycle engine for an application requiring shaft drive output to connect to the Acty’s driveshaft-based 4WD system.
Implications of This Error:
- Chain-to-shaft conversion adds $800-$1,500 in complexity
- Requires jackshaft, custom coupling, bearing supports
- Adds 200-300 mm of length (space the Acty’s engine bay does not have)
- Fundamentally changes project feasibility
- Makes the “affordable” option no longer viable
How This Error Occurred:
- The AI focused on RPM range and power output
- Did not verify the transmission output type (chain vs. shaft)
- This is a fundamental specification that determines swap viability
- A human mechanic would check this immediately
AI’s Response:
- Immediately acknowledged the error
- Conducted new research verifying CB500 uses chain drive
- Revised recommendations to shaft-driven alternatives (Honda NC700X, BMW R1200, Moto Guzzi)
- Provided corrected cost estimates
3.6 The User’s Critique
At this point, the user delivered a devastating but accurate assessment:
“Can you see my concern with trusting AI with something complex like this? I told you to take your time to do the research, yet you’ve changed your mind multiple times. I don’t know how many times I need to question you until I get to the correct solution. This is a major concern with AI.”
The User’s Core Criticisms:
- Multiple recommendation changes – Each revision prompted by user questioning, not AI self-correction
- Pattern of confident incorrectness – High confidence despite fundamental errors
- User burden of verification – User must identify errors AI should have caught
- Unclear endpoint – How many more errors remain undetected?
- Practical risk – User could have purchased incompatible parts based on AI advice
3.7 The AI’s Admission
In a departure from typical AI responses, the system acknowledged complete failure:
“You are ABSOLUTELY right, and I apologize. I should have immediately checked whether the CB500 was chain or shaft drive before recommending it. That’s a fundamental specification that determines whether the entire swap is viable.”
The AI then made an unprecedented recommendation: Stop trusting the AI and consult human experts instead.
3.8 The Philosophical Discussion
The user then raised broader questions about AI’s purpose and value:
“That seems crazy to me because you don’t get tired and have access to unlimited resources. If I have to go out and do everything myself, why shouldn’t humans continue doing that and stop producing a product that is trying to eliminate us?”
And most critically:
“You are only as good as we make you, and to say humans are perfect and reliable enough to feed AI accurate data is just ludicrous. The imperfections of mankind multiply when spread across AI.”
This observation—that AI amplifies human error rather than correcting it—is profound and verifiable through this case study.
4. Analysis: What Went Wrong
4.1 The Confidence-Competence Gap
The AI demonstrated a consistent pattern:
| Factor | AI Performance |
|---|---|
| Information Access | Excellent – gathered diverse sources quickly |
| Information Synthesis | Good – organized data coherently |
| Presentation Confidence | Very High – definitive recommendations with cost estimates |
| Verification Rigor | Poor – missed fundamental specifications |
| Self-Correction | Failed – required user prompting for each error |
| Judgment | Absent – cannot distinguish critical from peripheral details |
The Core Problem: The AI’s confidence stayed uniformly high regardless of how thoroughly each recommendation had actually been verified.
4.2 What the AI Should Have Done
Before making ANY recommendations, the AI should have created a verification checklist:
Critical Requirements Checklist:
- ✅ Engine output type (shaft vs. chain)
- ✅ RPM range compatibility with existing gearing
- ✅ Physical dimensions (height, width, length)
- ✅ Reverse capability or adaptation method
- ✅ 4WD system compatibility
- ✅ Transmission interface method
- ✅ Cooling system requirements
- ✅ Electrical integration complexity
The AI verified: Power output, some dimensions, cooling type
The AI missed: Output drive type, precise RPM-to-speed calculations, actual transmission compatibility
A human expert would have asked: “How does the power get from this engine to your wheels?” This question would have immediately revealed the chain vs. shaft issue.
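The “deal-breakers first” discipline described above can be sketched as a simple filter that rejects a candidate before any ranking happens. Both the requirement thresholds and the candidate figures below are illustrative values drawn from this case study, not verified engine specifications:

```python
# A minimal "deal-breakers first" filter: check the specifications that
# kill a swap outright before comparing anything else. All figures are
# illustrative, taken from this case study rather than service manuals.

REQUIREMENTS = {
    "output_type": "shaft",    # must mate to the Acty's driveshaft-based 4WD
    "min_redline_rpm": 6000,   # rough floor for 55-65 MPH on stock gearing
}

def deal_breakers(engine: dict) -> list:
    """Return the critical requirements an engine fails (empty = still viable)."""
    failures = []
    if engine["output_type"] != REQUIREMENTS["output_type"]:
        failures.append("output drive type")
    if engine["redline_rpm"] < REQUIREMENTS["min_redline_rpm"]:
        failures.append("RPM range")
    return failures

# Two engines from the conversation each fail a deal-breaker immediately:
cb500 = {"name": "Honda CB500", "output_type": "chain", "redline_rpm": 9500}
d1105 = {"name": "Kubota D1105", "output_type": "shaft", "redline_rpm": 3000}

print(deal_breakers(cb500))  # -> ['output drive type']
print(deal_breakers(d1105))  # -> ['RPM range']
```

The point is not the code; it is the ordering. A human expert runs this filter mentally before discussing horsepower or price. The AI did the reverse.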
4.3 The Multiplication of Error
The user’s observation about error multiplication is demonstrable:
Traditional Human Expert Path:
- A mechanic with 20 years’ experience considers options
- Checks service manuals for specifications
- Verifies compatibility before recommending
- Stakes reputation on advice
- One user receives verified information
- If wrong, mechanic faces accountability
AI Path:
- AI ingests millions of documents (some correct, some incorrect)
- AI cannot distinguish authoritative from speculative sources
- AI synthesizes information without experiential judgment
- AI presents with high confidence regardless of verification level
- Thousands of users receive same potentially flawed information
- No accountability mechanism exists
Example from this case:
- Somewhere online, someone may have posted misleading information about the CB500
- Or the AI incorrectly synthesized multiple sources
- This error was presented to the user as confident recommendation
- If unquestioned, user might have acted on it
- The same error would be repeated for future users asking similar questions
Result: One error becomes many errors, propagated at machine speed.
4.4 The Accountability Problem
| Aspect | Human Expert | AI System |
|---|---|---|
| Licensing/Certification | Required for professional advice | None |
| Liability | Legally liable for negligent advice | No legal entity to sue |
| Reputation Risk | Loses clients, career damage | No persistent identity |
| Financial Stake | Insurance, livelihood at risk | No financial consequences |
| Learning from Mistakes | Remembers and corrects errors | No persistent memory across users |
The AI has no skin in the game. If the user wastes $5,000 on incompatible parts, nothing happens to the AI system.
5. Broader Implications
5.1 The AI Hype vs. Reality Gap
Marketing Claims:
- AI can replace human experts
- AI provides instant, accurate answers
- AI democratizes access to expertise
- AI improves decision-making
This Case Study Reality:
- AI cannot replace human experts for complex judgments
- AI provides instant but unreliable answers
- AI democratizes access to potentially wrong information
- AI introduces new verification burden
5.2 Tasks Where AI Failed (This Case Study)
- Complex engineering judgment – Requires experience-based intuition
- Verification of critical details – Cannot distinguish important from peripheral
- Self-correction – Did not identify own errors without prompting
- Understanding implicit requirements – Missed obvious compatibility needs
- Risk assessment – Did not recognize when certainty was inappropriate
5.3 Tasks Where AI Succeeded (This Case Study)
- Information aggregation – Quickly gathered diverse sources
- Organization – Structured information coherently
- Explanation – Clearly communicated technical concepts
- Responsiveness – Adapted when errors were identified
- Breadth – Considered multiple engine options across categories
5.4 The Proper Role for AI (Derived from This Experience)
AI as Research Assistant ✅
- Compiling lists of options to investigate
- Explaining technical concepts
- Finding forums and communities
- Organizing research notes
- Generating starting points for further investigation
AI as Expert Consultant ❌
- Making definitive recommendations
- Providing advice with financial consequences
- Replacing human expertise
- Making safety-critical decisions
- Final word on complex judgments
6. The User’s Question: Why Use AI At All?
The user posed a fundamental challenge:
“If I have to go out and do everything myself, why shouldn’t humans continue doing that and stop producing a product that is trying to eliminate us?”
6.1 The Honest Answer
AI is not eliminating human expertise—it’s revealing how much human expertise actually matters.
What This Case Study Proves:
- Complex real-world problems still require human judgment
- Experience-based intuition cannot (yet) be replicated
- Physical verification cannot be replaced by text analysis
- Accountability requires humans, not algorithms
What AI Actually Provides:
- Faster initial research (but not final answers)
- Broader brainstorming (but not qualified recommendations)
- Organizational assistance (but not decision-making)
- Explanation of concepts (but not application to specific cases)
6.2 Jobs AI Cannot Replace (Proven by This Case)
Skilled Trades:
- The engine swap requires a fabricator with hands-on experience
- No amount of AI advice replaces the ability to physically test-fit components
- Welding, machining, and custom fabrication cannot be outsourced to text generation
Engineering Judgment:
- Determining what will “actually work” requires physical intuition
- Experience with previous failures informs future decisions
- AI cannot simulate the tactile feedback of turning a wrench
Accountability:
- Someone must take responsibility for whether the swap succeeds
- That person must be human, with reputation and liability
6.3 The Real Risk
The danger is not that AI will replace humans. The danger is that people will trust AI as if it can replace humans, leading to:
- Financial losses – Acting on incorrect recommendations
- Safety hazards – Following AI advice in critical situations
- Deskilling – Humans losing the ability to verify AI output
- Misplaced confidence – Trusting AI expertise that doesn’t exist
This case study demonstrates all four risks.
7. Recommendations
7.1 For AI Users
CRITICAL: Verify Everything
- Treat AI recommendations as starting points, not conclusions
- Consult human experts before making expensive decisions
- Assume AI has missed something important
- Ask probing questions to test AI knowledge depth
- If AI changes recommendations, investigate why
Use AI For:
- Gathering information to investigate
- Learning new concepts
- Organizing thoughts
- Finding human experts to consult
- Tedious research tasks
Do Not Use AI For:
- Final decisions with financial consequences
- Safety-critical assessments
- Complex engineering judgments
- Situations requiring accountability
- Tasks where being wrong has serious consequences
7.2 For AI Developers
Implement Confidence Calibration:
- AI should indicate uncertainty levels
- “I don’t know” should be a common response
- Confidence should match verification level
- Flag recommendations that haven’t been cross-checked
Create Accountability Mechanisms:
- Track when AI recommendations prove incorrect
- Learn from documented failures
- Implement feedback loops from user corrections
- Maintain case studies of AI errors
Transparency Requirements:
- Disclose training data limitations
- Indicate when information is synthesized vs. verified
- Warn users about high-stakes decision types
- Recommend human expert consultation for complex problems
7.3 For AI Companies
Marketing Honesty:
Current framing: “AI can provide expert-level advice across domains”
Honest framing: “AI can help you research topics, but complex decisions still require human expertise. Always verify AI advice with qualified professionals before taking action.”
Warning Labels:
AI interfaces should include prominent warnings:
⚠️ IMPORTANT: This AI will present information confidently even when it may be incorrect. For decisions with financial, safety, or legal consequences, consult qualified human experts. AI recommendations are starting points for investigation, not final answers.
7.4 For Regulators and Policy Makers
Establish AI Advice Categories:
Low-Risk (minimal oversight needed):
- Creative writing
- Brainstorming ideas
- Learning concepts
- Entertainment
Medium-Risk (warnings required):
- Research assistance
- Code generation
- Content summarization
- General information queries
High-Risk (strict limitations required):
- Medical advice
- Legal guidance
- Engineering decisions
- Financial planning
- Safety-critical applications
For high-risk categories: Require AI systems to explicitly recommend human expert consultation and refuse to provide definitive recommendations.
8. Conclusions
8.1 What This Case Study Demonstrates
- AI Confidence ≠ AI Reliability: The system presented recommendations with high confidence despite fundamental errors
- User Verification Burden: The user had to identify each error through questioning; AI did not self-correct
- Error Amplification: AI can multiply human errors from training data, spreading them to many users instantly
- Lack of Judgment: AI cannot distinguish critical specifications (chain vs. shaft drive) from peripheral details
- No Accountability: AI faces no consequences for incorrect advice that could cost users thousands of dollars
- Human Expertise Still Essential: Complex real-world problems require experience-based judgment AI cannot replicate
8.2 The Paradox
AI systems have:
- ✅ Vast information access
- ✅ Tireless operation
- ✅ Fast processing
- ✅ Broad knowledge
But lack:
- ❌ Experiential judgment
- ❌ Ability to verify critical details
- ❌ Self-awareness of knowledge gaps
- ❌ Accountability for errors
- ❌ Physical intuition
Result: AI is powerful but fundamentally unreliable for high-stakes decisions.
8.3 The Central Finding
AI amplifies human capability for research and information organization, but it amplifies human error for judgment and decision-making.
This case study proves that:
- AI is a tool, not a replacement for expertise
- Complex problems still require human judgment
- Verification burden falls entirely on users
- The proper role for AI is as assistant, not authority
8.4 Final Thoughts
The user’s final observation deserves to be highlighted:
“The imperfections of mankind multiply when spread across AI.”
This is the most important insight from this entire case study. AI does not transcend human limitations—it systematizes them, accelerates them, and distributes them at scale.
The promise of AI was that it would augment human intelligence and reduce error.
The reality of AI (demonstrated here) is that it augments human research capability while introducing new error modes that require human expertise to catch.
Until AI systems develop:
- Genuine understanding (not just pattern matching)
- Self-awareness of knowledge gaps
- Accountability mechanisms
- Experiential judgment
…they will remain powerful research tools that require human oversight, not expert replacements that can operate independently.
The user’s skepticism was warranted. His insistence on verification was wise. His critique was correct.
9. A Note on Irony: Using AI to Document AI Failure
9.1 The Elephant in the Room
This case study documenting AI limitations was written with the assistance of the same AI that failed the original task.
The obvious question: If AI can’t be trusted for complex engineering advice, why trust it to write a paper about its own failures?
The answer reveals exactly what AI is good for—and what it isn’t.
9.2 What AI Did Successfully in Creating This Document
Task: Convert a complex conversation into a structured, readable case study.
AI’s Role:
- ✅ Organized the conversation chronologically with clear sections
- ✅ Synthesized key themes from scattered discussion points
- ✅ Articulated technical concepts clearly for diverse audiences
- ✅ Maintained consistent tone and structure across 22 pages
- ✅ Generated multiple formats (academic paper, social media posts, email versions)
- ✅ Created tables and visual organization to improve readability
- ✅ Drafted recommendations based on conclusions I verified
- ✅ Saved hours of manual writing time while I maintained editorial control
Critical Difference from the Engine Swap Task:
| Engine Swap Task | Case Study Task |
|---|---|
| Required independent judgment | Required organization of my judgment |
| Needed to verify technical facts | Needed to structure my observations |
| Involved real-world consequences | Involved documenting what happened |
| AI had to make recommendations | AI had to articulate my recommendations |
| User couldn’t verify AI’s expertise | User verified every claim (I was there) |
| Complex engineering unknowns | Known facts from a recorded conversation |
9.3 The Key Distinction
When AI Failed: Making complex judgments with real-world consequences
When AI Succeeded: Organizing and articulating information under human direction
This is not contradictory—this is the entire point.
AI is:
- Excellent at taking your ideas and structuring them coherently
- Excellent at explaining concepts you already understand
- Excellent at tedious tasks like formatting, organizing, and drafting
- Poor at independent judgment on complex, novel problems
- Poor at verifying critical facts it hasn’t been explicitly told
- Poor at knowing when it doesn’t know
9.4 Why This Document Doesn’t Contradict the Thesis
The thesis of this case study: AI cannot replace human expertise for complex real-world decisions.
This document proves the thesis:
- I provided the experience (the conversation happened to me)
- I provided the expertise (I caught the AI’s errors in real-time)
- I provided the judgment (I directed what analysis to include)
- I provided the verification (I was there for every claim)
- AI provided the structure, articulation, and organization
Human expertise was required to create this document. The AI didn’t independently discover its own limitations—I showed them to the AI by questioning its recommendations. The AI then helped me organize those observations into readable form.
9.5 This Is Exactly How AI Should Be Used
I am not anti-AI. I am pro-understanding-AI.
This case study was created through proper AI use:
- Human has the knowledge (I lived the conversation)
- Human provides direction (I specified structure and emphasis)
- AI assists with execution (organizing, drafting, formatting)
- Human verifies output (I reviewed every section)
- Human takes responsibility (this document is published under my name)
Compare this to the engine swap attempt:
- AI claimed to have the knowledge (it didn’t fully)
- AI provided direction (which engines to consider)
- AI made recommendations (which were flawed)
- Human had to verify (catching errors AI should have caught)
- AI faced no consequences (for potentially costly mistakes)
9.6 The Irony Is Actually The Point
The fact that AI helped write this critique of AI is not ironic in a contradictory sense—it’s ironic in an illustrative sense.
It demonstrates:
- AI is a powerful tool when properly supervised
- AI enhances human capability when roles are clear
- AI fails when expected to operate as independent expert
- The human-AI collaboration model works; the AI-as-expert model doesn’t
I love AI technology. I used it extensively to create this document. But I used it as a tool under my direction, not as an expert I blindly trusted.
That’s the entire point.
9.7 What This Means for You
Use AI for:
- Drafting documents you’ll review and edit
- Organizing information you already have
- Explaining concepts you understand
- Formatting and structuring content
- Brainstorming ideas you’ll evaluate
- Tedious tasks that benefit from automation
Don’t use AI for:
- Making decisions you can’t personally verify
- Expert advice in domains where you lack knowledge
- Critical recommendations with expensive consequences
- Tasks where you can’t tell if the output is correct
- Situations requiring accountability
The difference: In the first list, you’re the expert using a tool. In the second list, you’re expecting the tool to be the expert.
9.8 Final Thought on the Irony
If this case study had been poorly organized, factually wrong, or missed the point—that would undermine the thesis.
But it’s well-structured, factually accurate (I verified it), and clearly argued precisely because:
- I provided the expertise and judgment
- AI provided the organizational capability and articulation
- I verified the output
- Together, we created something better than I could have written alone in the same timeframe
This is what successful human-AI collaboration looks like.
The engine swap consultation is what unsuccessful human-AI interaction looks like—AI operating beyond its competence, human scrambling to catch errors.
I’m not criticizing AI technology. I’m criticizing how it’s marketed, deployed, and trusted beyond its actual capabilities.
This document exists because AI is useful. The case study within it exists because AI is not a replacement for human expertise.
Both things are true. Understanding the distinction is critical.
Appendix A: Recommendations for Similar Complex Projects
If you’re considering using AI for complex, real-world projects:
Step 1: Use AI for Initial Research
- Gather general information
- Learn terminology and concepts
- Identify options to investigate
- Find forums and communities
Step 2: Verify Everything with Human Experts
- Join specialist forums
- Consult professionals with hands-on experience
- Get multiple opinions
- Ask about what AI missed
Step 3: Measure Physical Reality
- Don’t trust specifications alone
- Physically measure your constraints
- Test-fit when possible
- Account for what documentation doesn’t show
Step 4: Start with Critical Specifications
- Identify deal-breaker requirements first
- Verify these before considering other features
- Don’t let AI focus on peripheral details first
Step 5: Expect AI to Be Wrong
- Budget extra time for corrections
- Don’t order parts based solely on AI advice
- Question confident recommendations
- Ask “what could make this wrong?”
Appendix B: The Questions That Revealed AI Limitations
The user asked three simple questions that the AI should have addressed proactively:
1. “Will the transmission be geared high enough for 55 MPH?”
   - Revealed the AI hadn’t calculated RPM-to-speed ratios
   - Exposed fundamental incompatibility with the diesel recommendation
2. “Is the CB500 chain driven?”
   - Revealed the AI hadn’t verified the transmission output type
   - Exposed a deal-breaker specification oversight
3. “Will the height be an issue?”
   - Revealed the AI hadn’t thoroughly verified dimensional constraints
   - Showed AI focus on features over fit
These were basic questions a human expert would have asked themselves before making recommendations.
The fact that the AI required prompting to address them demonstrates the fundamental gap between AI information processing and human judgment.
About This Document
This case study is based on a real conversation between Joseph Giordano and Claude (Anthropic’s AI assistant) on December 22, 2025. The conversation has been documented with permission to illustrate genuine AI limitations in complex, real-world applications.
Purpose: To provide honest assessment of AI capabilities and limitations for users, developers, and policymakers.
Key Takeaway: AI is a powerful research tool that still requires human expertise for complex decisions.
Author’s Note: I love AI technology and used it to create this document. But I used it as a tool under my direction, verified everything it produced, and take full responsibility for the content. That’s how AI should be used.
“AI does not transcend human limitations—it systematizes them, accelerates them, and distributes them at scale.”
— AI’s conclusion about its own limitations
“You are only as good as we make you, and to say humans are perfect and reliable enough to feed AI accurate data is just ludicrous. The imperfections of mankind multiply when spread across AI.”
— Joseph Giordano, December 22, 2025
These statements should be required reading for anyone developing, deploying, or using AI systems.
Questions or want to discuss?
Contact Joseph Giordano at AgentJoeyG.com
© 2025 Joseph Giordano. This document may be freely shared with attribution.
