
As AI increasingly takes over the work of modern programmers, the cybersecurity world has warned that automated coding tools are sure to introduce a new bounty of hackable bugs into software. When those same vibe-coding tools invite anyone to create applications hosted on the web with a click, however, it turns out the security implications go beyond bugs to a total absence of any security—even, sometimes, for highly sensitive corporate and personal data.
Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely found their URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots.
“The end result is that organizations are actually leaking private data through vibe-coding applications,” says Zvi. “This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world.”
Zvi says RedAccess’s scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies’ own domains, rather than the users’. So the researchers used straightforward Google and Bing searches for those companies’ domains, combined with other search terms, to identify thousands of apps that had been vibe-coded with the companies’ tools.
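The search technique described above can be sketched in a few lines: pair each platform's default hosting domain with a `site:` restriction and a keyword suggesting sensitive content. This is an illustrative reconstruction under assumptions, not RedAccess's actual methodology; the keyword list and query format here are invented for the example, though the domains shown are the platforms' publicly documented default hosting subdomains.

```python
# Illustrative sketch of search-engine "dorking" for apps hosted on the
# AI tools' default domains. Keywords and query format are assumptions
# for illustration only.

HOSTING_DOMAINS = [
    "lovable.app",   # Lovable's default hosting domain
    "replit.app",    # Replit deployments
    "base44.app",    # Base44-hosted apps
    "netlify.app",   # Netlify's default subdomain
]

# Hypothetical terms that might surface sensitive apps.
SENSITIVE_KEYWORDS = ["admin", "dashboard", "invoice", "patients"]

def build_dorks(domains, keywords):
    """Build one search query per (domain, keyword) pair."""
    return [f'site:{d} "{k}"' for d in domains for k in keywords]

queries = build_dorks(HOSTING_DOMAINS, SENSITIVE_KEYWORDS)
print(len(queries))   # 16 queries: 4 domains x 4 keywords
print(queries[0])     # site:lovable.app "admin"
```

Each resulting string can be pasted into Google or Bing; the `site:` operator restricts results to the given domain, which is why hosting user apps on the vendor's own domain makes them so easy to enumerate at scale.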
Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data. Screenshots of web apps he shared with WIRED, several of which WIRED verified were still online and exposed, showed what appeared to be a hospital’s work assignments with the personally identifiable information of doctors; a company’s detailed ad-purchasing information; what appeared to be another firm’s go-to-market strategy presentation; a retailer’s full logs of its chatbot’s conversations with customers, including the customers’ full names and contact information; a shipping firm’s cargo records; and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators.
In the case of Lovable, Zvi says he also found numerous examples of phishing sites impersonating major corporations, including Bank of America, Costco, FedEx, Trader Joe’s, and McDonald’s, that appeared to have been created with the AI coding tool and hosted on Lovable’s domain.
When WIRED asked the four AI coding companies about RedAccess’s findings, Netlify didn’t respond, but the three other companies pushed back on the researchers’ claims and protested that RedAccess hadn’t shared enough of its findings or given the companies enough time to respond. (RedAccess says it reached out to the companies on Monday.) But none of them denied that the web apps RedAccess found were left exposed.
“From the limited information they shared, [RedAccess’s] core claim appears to be that some users have published apps on the open web that should’ve been private,” Replit’s CEO Amjad Masad wrote in a response post on X. “Replit allows users to choose whether apps are public or private. Public apps being accessible on the internet is expected behavior. Privacy settings can be changed at any time with a single click.”