The rapid rise of DeepSeek, a Chinese generative AI platform, raised concerns this week about United States AI dominance as Americans increasingly adopt Chinese-owned digital services. With ongoing criticism over alleged security issues dogging TikTok's relationship with China, DeepSeek's own privacy policy confirms that it stores user data on servers in the country.
Meanwhile, security researchers at Wiz discovered that DeepSeek had left a critical database exposed online, leaking more than 1 million records, including user prompts, system logs, and API authentication tokens. And as the platform promotes its cheaper R1 reasoning model, security researchers tested 50 well-known jailbreaks against DeepSeek's chatbot and found its safety protections lacking compared to those of Western competitors.
Brandon Russell, the 29-year-old cofounder of the Atomwaffen Division, a neo-Nazi guerrilla organization, had a hearing this week over an alleged plot to attack Baltimore's power grid and trigger a race war. The hearing offers a look at federal law enforcement's investigation into a disturbing propaganda network that aims to inspire mass-casualty events in the US and beyond.
An informal group of West African fraudsters who call themselves the Yahoo Boys are using AI-generated news anchors to extort victims, delivering fabricated news reports that falsely accuse them of crimes. According to a WIRED review of Telegram posts, these scammers are creating highly convincing fake news broadcasts to pressure victims into paying ransoms by threatening public humiliation.
That's not all. Each week, we round up the security and privacy news we didn't cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
According to a report by the Wall Street Journal, hacking groups with known ties to China, Iran, Russia, and North Korea are using AI chatbots such as Google Gemini to assist with tasks like writing malicious code and researching potential attack targets.
While Western officials and security experts have long warned about the potential for malicious use of AI, the Journal, citing a report from Google, noted that the dozens of hacking groups across more than 20 countries are mainly using the platform as a research and productivity tool, focusing on efficiency rather than on developing sophisticated, novel hacking techniques.
For example, Iranian groups used the chatbot to generate phishing content in English, Hebrew, and Farsi. China-linked groups used Gemini for tactical research into technical concepts such as data exfiltration and privilege escalation. And North Korean hackers used it to draft cover letters for remote technology jobs, in support of the regime's effort to place spies in tech roles to fund its nuclear program.
This is not the first time state-backed hacking groups have been found using chatbots. Last year, OpenAI revealed that five such groups had used ChatGPT in similar ways.
WhatsApp revealed on Friday that nearly 100 journalists and civil society members were targeted by spyware developed by the Israeli firm Paragon Solutions. The Meta-owned company alerted the affected individuals and said it had "high confidence" that at least 90 users were targeted and "possibly compromised," according to a statement to The Guardian. WhatsApp did not disclose where the victims were located, including whether any were in the United States.
The attack appears to have used a "zero-click" exploit, meaning victims are infected without needing to open a malicious link or attachment. Once a phone is compromised, the spyware, known as Graphite, has full access to the device, including the ability to read end-to-end encrypted messages sent via apps such as WhatsApp and Signal.