Shady company relaunches popular old tech blogs, steals writers’ identities

In a brazen misuse of AI technology, a Hong Kong-based web advertising firm has relaunched classic tech blogs like The Unofficial Apple Weblog (TUAW) and iLounge, populating them with AI-generated content falsely attributed to the sites’ original writers.
Unethical use of AI and stolen identities: Web Orange Limited, the company behind the relaunch, claims to have purchased the domain names and brand identities but not the original content. It has used AI to reword old articles and generate new ones, attaching the names of former writers without their consent:
- Christina Warren, a former TUAW writer now at GitHub, discovered her byline on AI-generated articles she never wrote and threatened legal action, prompting the company to swap her name for a likely fabricated one.
- Author bios on the relaunched sites appear generic and possibly AI-generated, and are accompanied by photos that do not depict the real writers and have been taken from elsewhere without permission.
Exploiting domain authority for profit: The relaunched websites capitalize on the domain names’ lingering value in Google rankings to drive traffic and generate advertising revenue:
- Web Orange Limited is a web advertising firm and appears focused on monetizing the domains rather than producing authentic content.
- By using AI to quickly generate articles that mimic the style and topics of the original websites, they can attract visitors and serve ads without the need for human writers or genuine journalism.
Connections to a dubious individual: Initially, the websites named Haider Ali Khan, an Australian residing in Dubai, as the owner of Web Orange Limited. However, mentions of his name were removed after the unethical practices came to light:
- Khan’s personal website, which has since been taken offline, described him as a “cyber security analyst” and “advocate for web security” who recently began investing in and managing technology news blogs.
- The sudden disappearance of his name and website details raises questions about his involvement and the company’s reaction to being exposed.
Broader implications for online trust and authenticity: This incident highlights the potential for AI to be used in ways that erode trust in online content and threaten the livelihoods of professional writers:
- As AI language models become more advanced, it may become increasingly difficult for readers to distinguish between genuine, human-written articles and those generated by algorithms.
- The ease with which bad actors can create fake content and attribute it to real people undermines the credibility of online journalism and makes it harder for legitimate writers to build and maintain their reputations.
- This kind of deception also contributes to the spread of misinformation, as AI-generated articles can be used to manipulate public opinion or push false narratives, further eroding trust in media outlets.
The misuse of AI to impersonate writers and exploit their reputations for profit represents a disturbing new frontier in online deception. As AI technologies continue to advance, it will be increasingly important for lawmakers, tech companies, and media organizations to develop robust safeguards against such unethical practices and to educate the public about the potential risks. Without decisive action, incidents like this could become more common, chipping away at the foundations of trust that underpin the digital information ecosystem.