Connected vehicles create hundreds of API connection points, which are essentially “doorways” that let two computer programs talk to each other. 
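For a sense of what one of those doorways looks like in practice, here is a minimal sketch in Python of a third-party app calling a hypothetical connected-vehicle API. The endpoint, token, and field names are invented for illustration; no real manufacturer's interface is implied.

    # Illustrative only: a hypothetical third-party app requesting data from a
    # connected-vehicle API. Endpoint, credential, and fields are made up.
    import requests

    API_BASE = "https://api.example-automaker.com/v1"   # hypothetical endpoint
    ACCESS_TOKEN = "dealer-app-token"                    # credential the vendor holds

    def get_vehicle_status(vin: str) -> dict:
        """Ask the automaker's API for a vehicle's current status."""
        response = requests.get(
            f"{API_BASE}/vehicles/{vin}/status",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()  # surface auth or availability failures
        return response.json()       # e.g., odometer, location, lock state

    print(get_vehicle_status("EXAMPLEVIN1234567"))

Every such endpoint, and every credential a vendor holds to call it, is another doorway someone has to keep track of.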

Driving the news: Third-party vendors, not manufacturers, build most of the applications behind these connection points. As a result, dealers often lack visibility into where cybersecurity vulnerabilities lie, especially as AI becomes more ubiquitous in the industry.

"API is a huge threat landscape at this point. There's no avoiding it with the connected vehicle," Joshua Poster, director of intelligence and analysis at Auto-ISAC, told attendees at its 9th annual cybersecurity conference in Washington, D.C. "It's hard to get your mind around where all your API points are.”

Most automotive organizations, however, are behind: they're deploying AI tools across operations but lack the basic governance frameworks to manage the risks.

"Bad actors are using AI already," Nicholas Panos, senior cybersecurity advisor at Google Cloud explained, during the panel moderated by Upstream. "The only way to beat a bad actor that's using AI is to be a good actor using better AI."

For context: Automotive organizations rushed to adopt AI over the past two years to help improve operational efficiencies and the overall customer experience. 

But Sachin Singh, managing director at Deloitte, explained that AI adoption is often driven by the chase for high-ROI wins, while security isn’t typically the top priority.

"Companies are realizing cybersecurity is actually going to be even more important, not just because of the threat landscape changing, but the amount of data we now collect from vehicles, connected vehicles," Singh said.

Why it matters: Dealers integrating AI tools for inventory management, customer communication, and service operations are expanding their attack surface with every vendor relationship. Without visibility into API security across these systems, they can't truly know their exposure. And without governance frameworks, they can't manage risk as AI capabilities expand.

The fix: From Singh's perspective, the starting point is a seven-pillar framework for “trustworthy AI” covering fairness, accountability, responsibility, transparency, security, privacy, and reliability.

Security is only one of those pillars, and most organizations don't have any framework at all, he added.

How it works: Establish a cross-functional team representing all seven pillars, not just security and IT. 

  • Adopt an existing framework rather than building from scratch. 

  • Enhance software development lifecycles to account for AI security. 

  • Implement training, monitoring, and auditable processes.

On top of that: “If you are building a project plan, please make sure there is enough time to cleanse your data, test against your data," Singh said. "I cannot say it enough."
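For a picture of what "cleanse your data, test against your data" can involve, here is a minimal sketch in Python, assuming a dealer is handing an AI tool a CSV export of service records. The file name and column names are assumptions for illustration, not any particular system's schema.

    # Illustrative data-cleansing pass before records reach an AI tool.
    # File name and columns are invented; adapt to the actual export.
    import pandas as pd

    records = pd.read_csv("service_records_export.csv")

    # Cleanse: drop exact duplicates and rows missing fields the tool needs.
    records = records.drop_duplicates()
    records = records.dropna(subset=["vin", "customer_email", "repair_description"])

    # Basic validation: a VIN should be 17 characters.
    records = records[records["vin"].str.len() == 17]

    # Hold back a slice to test the tool's output against known-good records,
    # rather than judging it only on live data.
    test_set = records.sample(frac=0.1, random_state=42)
    input_set = records.drop(test_set.index)

    print(f"{len(input_set)} clean rows for the tool, {len(test_set)} reserved for testing")

Budgeting project time for steps like these is the point of Singh's warning.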

Between the lines: Get governance boards in place before making major AI investments, Panos said. Develop responsible use policies for employees and ensure data quality matches expected outcomes.

"CISOs are kind of bristling at the perception—your division is where fun goes to die," he said. "We have to shift that mindset from being a blocker to being a critical enabler."

And compliance complicates the picture: the ways customer data is collected, aggregated, and analyzed vary from jurisdiction to jurisdiction.

So, scalability across markets requires upfront compliance planning.

What dealers can do now: Poster recommended dealers narrow the scope of expectations for any AI tool before adoption. 

"Having some review process of the tool itself, the back end of that tool, the ability to have confidence that it's not going to open up your threat landscape, rather help you close it," he said.

That means validating AI tools before deployment. Does the vendor have security documentation? Can they explain how the tool accesses dealer systems? What data does it collect and where does it go? Those questions should have answers before a tool goes live.

  • Singh also emphasized data quality as the foundation for any AI security capability. 

  • Without clean data, AI tools can't perform their intended function—whether that's threat detection or vulnerability analysis. 

Bottom line: AI adoption in automotive is accelerating so fast that the corresponding security infrastructure is struggling to keep up, leaving dealers vulnerable.
