Apple’s new AI-powered notification system is inadvertently lending credibility to scam messages by summarizing and prioritizing them alongside legitimate communications on iPhones and Mac computers.
Key developments: Apple’s “Apple Intelligence” update, rolled out to Australian users in late 2024, includes features that summarize notifications and prioritize certain alerts using artificial intelligence.
- The system condenses multiple notifications into single messages and flags what it determines to be urgent communications
- This AI-powered feature is applied indiscriminately, treating legitimate messages and scam attempts alike
- Apple has already faced criticism for incorrectly summarizing BBC headlines, including a notable error regarding a CEO’s alleged killer
Real-world implications: Security experts and users are reporting instances where the AI system is inadvertently legitimizing fraudulent communications.
- One user reported receiving a falsified tax notice that was flagged as priority by Apple’s system
- The AI summarizes scam messages in clear, professional language, potentially making them more convincing to recipients
- Users on social media platforms have documented numerous cases of the system prioritizing obvious scam attempts
Expert analysis: Security and AI specialists warn that this feature could increase vulnerability to scams, particularly in Australia where consumers lost $2.7 billion to fraud in 2023.
- Professor Daswin De Silva of La Trobe University warns that people may place excessive trust in Apple’s AI-powered summaries
- The summarization process can strip away telltale signs of fraudulent messages, making it harder to distinguish legitimate communications from scams
- Experts criticize the rapid deployment of AI features without adequate testing and gradual implementation
Corporate response: Apple has acknowledged some issues with the system and is working on modifications.
- The company has committed to updating the feature to clearly indicate when text is an AI-generated summary
- Apple has not directly addressed concerns about the system’s handling of fraudulent messages
- The company’s initial examples focused on benign use cases like summarizing group chats and flight notifications
Looking ahead: The situation highlights a critical challenge in AI development: the balance between convenience and security. As AI systems become more integrated into daily communications, their potential to inadvertently amplify sophisticated fraud attempts may require more robust safety measures and user education before deployment.