Ian Carroll and Sam Curry reported on 9 July that they had managed to hack into the backend of an AI chatbot platform on McHire.com, a site that US McDonald’s franchisees use during the hiring process.
The researchers accessed a Paradox.ai account using the password ‘123456’, which gave them access to databases holding McHire users’ chats with an AI chatbot called Olivia.
The researchers were able to view the information of five candidates, including their names, email addresses, phone numbers and addresses.
“The McHire administration interface for restaurant owners accepted the default credentials 123456:123456, and an insecure direct object reference (IDOR) on an internal API allowed us to access any contacts and chats we wanted,” Carroll wrote on his website.
“Together they allowed us and anyone else with a McHire account and access to any inbox to retrieve the personal data of more than 64 million applicants.”
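The flaw Carroll describes, an insecure direct object reference, occurs when an API looks records up by a guessable ID without checking that the caller is authorised to see them. The following is a minimal illustrative sketch; the record IDs, field names and ownership model are hypothetical, not Paradox.ai's actual API:

```python
# Hypothetical sketch of an IDOR (insecure direct object reference).
# All names and data below are illustrative, not from the real system.

CHATS = {
    101: {"owner": "franchise_a", "applicant": "Alice", "email": "alice@example.com"},
    102: {"owner": "franchise_b", "applicant": "Bob", "email": "bob@example.com"},
}

def get_chat_insecure(chat_id: int) -> dict:
    """Vulnerable: returns any record whose ID the caller supplies,
    with no check that the caller is allowed to see it."""
    return CHATS[chat_id]

def get_chat_secure(chat_id: int, caller: str) -> dict:
    """Fixed: verifies the authenticated caller owns the record
    before returning it."""
    chat = CHATS[chat_id]
    if chat["owner"] != caller:
        raise PermissionError("caller does not own this record")
    return chat

# Logged in as franchise_a, an attacker can still read franchise_b's chat
# through the insecure endpoint simply by incrementing the ID:
leaked = get_chat_insecure(102)

# The secure version refuses the same request:
try:
    get_chat_secure(102, caller="franchise_a")
except PermissionError:
    pass  # access denied, as intended
```

Because record IDs are typically sequential, an attacker who finds one such endpoint can iterate over IDs and harvest every record, which is how a single flaw can expose tens of millions of entries.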
Carroll and Curry also applied for a job at their local McDonald’s, during which they were asked to complete a personality test. After completing it, their application appeared to be stuck awaiting human review.
“We immediately began disclosure of this issue once we realized the potential impact,” Carroll continued.
“Unfortunately, no disclosure contacts were publicly available and we had to resort to emailing random people. The Paradox.ai security page just says that we do not have to worry about security!”
The Paradox.ai team eventually engaged with the researchers and emphasised that safeguarding candidate and client data was their top priority.
“[Paradox.ai] promptly remediated the vulnerability, and committed to further reviews to identify and close any remaining avenues of exploitation,” concluded Carroll.
McDonald’s said that it told Paradox.ai to fix the issue as soon as it learned of the hack.
“We’re disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai,” a McDonald’s spokesperson said.
“[The issue] was resolved on the same day it was reported to us. We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection.”
Meanwhile, Paradox.ai issued the following statement: “On June 30, two security researchers reached out to the Paradox team about a vulnerability on our system. We promptly investigated the issue and resolved it within a few hours of being notified.
- Importantly, at no point was candidate information leaked online or made publicly available.
- Five candidates in total had information viewed because of this incident, and it was ONLY viewed by the security researchers.
- This incident impacted one organization – no other Paradox clients were impacted.
“Using a legacy password, the researchers logged into a Paradox test account related to a single Paradox client instance. We’ve updated our password security standards since the account was created, but this test account’s password was never updated. Once logged into the test account, the researchers identified an API endpoint vulnerability that allowed them to access information related to chat interactions in the affected client instance.
“Unfortunately, none of our penetration tests previously identified the issue. The majority of the chat interaction records were not tied to a candidate in the system and did not include candidate personal information. However, to validate their findings, the researchers pulled down seven chat interaction records, five of which were for U.S.-based candidates that included names, email addresses, phone numbers and IP addresses. The other two chat interaction records did not include any candidate personal information. Again, once we learned of this issue, the test account credentials were immediately revoked and an endpoint patch was deployed, resolving the issue within a few hours.”
Commenting on the incident, Aditi Gupta, senior manager of professional services consulting at Black Duck, said that it was evidence that “sophisticated AI systems can be compromised” by simple oversights such as weak passwords and a lack of monitoring.
“This incident highlights a systemic issue in how organisations approach security, particularly when implementing AI and automation solutions,” Gupta said.
“The rush to deploy new technology must not compromise basic security principles. Organisations must prioritise fundamental security measures to ensure uncompromised trust in their software, especially for the increasingly regulated, AI-powered world.”
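One of the “fundamental security measures” Gupta alludes to is rejecting default and well-known weak passwords at account creation, a basic control that would have blocked the ‘123456’ credentials at the centre of this incident. A minimal sketch of such a check (the deny-list and length threshold below are illustrative choices, not a named standard):

```python
# Hypothetical sketch: rejecting default or commonly-used weak passwords
# when an account is created or a password is changed.

# Illustrative deny-list; real deployments use much larger lists of
# known-breached passwords.
KNOWN_WEAK = {"123456", "password", "admin", "123456789", "qwerty"}

def validate_password(password: str, min_length: int = 12) -> bool:
    """Return True only if the password meets a minimum length and
    is not on the deny-list of common defaults."""
    if len(password) < min_length:
        return False
    if password.lower() in KNOWN_WEAK:
        return False
    return True

assert not validate_password("123456")  # the credential from the report
assert validate_password("correct-horse-battery-staple")
```

Checks like this cost little to implement, which is why incidents traced to default credentials are widely seen as process failures rather than sophisticated attacks.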
In a separate incident, a November 2024 data breach at Krispy Kreme saw hackers access the data of 161,000 customers, current and former employees, and employees’ family members.
This story has been updated to make it clear that the researchers did not access the data of 64 million people.