Where finance and media intersect with reality.

Gabriellina

Strange things are afoot in Wall Street Bets.

Tales of online ghosts manipulating the market are quietly being shared around virtual campfires at midnight.

Chief among them are concerns that Jake Freeman, the elusive 20-year-old multimillionaire investor in BBBY and Mind Medicine, may not really exist.

“Dude looks AI Generated.

He looks like what happens when you decide to mash the ‘randomize’ button during character creation.

I will not believe he actually exists until I see a video.”

Many of these Redditors suggest that if Freeman doesn’t exist, the entire narrative that pumped BBBY stock over the last couple of months may have come out of a shady hedge fund operation.

Their reluctance to give up on the theory comes despite the fact that Freeman told the FT in late August that “there were conspiracy theories that ranged from I don’t actually exist to I am a front for a Taiwanese amusement park”. Interviews the FT conducted with Freeman’s high school friends further testify to his existence.

The Blind Spot can offer further assurances. We spoke with Freeman directly on Tuesday and have since also seen and verified his identity documents.

“I mean I know I’m real. I think, at least,” Freeman told us. He added that the claims were “sort of dejecting, and I guess a form of what people these days call gaslighting”.

Despite this, the speculation he does not exist persists.

Other Redditors on a related subreddit are now upset that such conspiracies give the community a bad name. One thread was titled: “Stop this Jake Freeman is fake shit. You’re making us look like a conspiracies-driven cult”. 

Redditors’ concerns may indeed seem crazy. But while Freeman is definitely real, the idea that fake people are causing mayhem online should not necessarily be taken lightly.

Why Developing a Fake Online Profile is a Bit Like Ageing a Fine Wine

Creating a fake persona, or fake data full stop, has never been easier.

Let’s give you some training on how to create the basic layers of a fake person online.

  • Step 1: Head over to thispersondoesnotexist.com to acquire some faces, with zero friction from paywalls or registration barriers.
  • Step 2: Create two Proton Mail accounts, as these do not require an assigned phone number.
  • Step 3: Use these to create a Facebook, Instagram, and Twitter handle – one of each for both emails.
  • Step 4: Use the accounts from one email to interact with the other (preferably from different IP addresses using VPNs).

Congratulations – you have now created your very first “legend”!

Since social media giants’ algorithms hardly ever detect fake accounts if they have legitimate engagement patterns, you can now use these accounts to carefully scrape Instagram follower data, Facebook location and interaction data, and Twitter likes and retweets. The more posting time you invest in cultivating your profile, the higher the price you can sell a vintage account for on the dark web.

You may assume trolling celebrities, politicians and other influential folk on social media is about the full extent of what you can do with such a profile.

Oh, ye of little faith. There is far more opportunity for mischief than that. One of the most popular illicit uses of fake profiles is in generating fraudulent financial documents through the manipulation of corporate registration services.

Take the UK as an example.

To create a company on the Companies House register, all you need to do is fill in a form and provide a printed photocopy of your passport. In case you need it, here’s the download link to Photoshop.

Financial expert Graham Barrow has been shouting about the potential abuse of Companies House for years on his podcast The Dark Money Files. “Criminals (…) can log on to Companies House, create 15 or 20 companies, and in less than 24 hours the incorporation documents will be available in PDF format to download,” Graham told the New Statesman.

Companies House information isn’t linked to HMRC data, meaning “you can file completely different sets of accounts in both places, and no one will check.” 

The results range from the extraordinary to the exotic. Some Anglesey residents one day discovered that a bunch of Hungarian zoos (seven in all) had been registered to their homes.

“You can abuse the system super f*cking easily”, said a source we contacted at an investigative consultancy, who preferred not to give their name. The problem goes beyond small-scale dodginess; even large corporates rely on data from Companies House accounts. “(It’s crazy) that people base so many business decisions on Companies House when it can be so easily manipulated”, explained the anonymous source.

Barrow has a good tale about a company called Malaya Pro Industries Limited, which at one point had a share capital of £100,000,000,000,000 – more than the annual economic output of the whole world.

There’s also the issue of virtual addresses, like 20-22 Wenlock Road, London, which hosts a grand total of 43,522 companies.

“Companies House is essentially a photocopier and scanner, with not much if any scrutiny powers,” our investigatory source said.

The good news is that the UK register is at least honest about its inability to verify the accuracy of its files. It’s the first thing you see disclosed (top-left on the image above) when you fire up the search engine.

The new Economic Crime and Corporate Transparency Bill aims to increase the powers of Companies House to address many of these issues. But for now the problem persists.

Trust Erosion

The barriers to entry for creating fake personas are falling all the time. But guarding against the risk of being manipulated by a deepfake account leads you straight into another: paranoia and detachment from reality. This is most apparent in crypto “dark-market” forums, subreddits and Discords, where every other user seems obsessed with the prospect that “A.N. Other” user might secretly be an undercover Fed.

Where might a fake persona strike next? And who might it dupe? Does a cyber-dependent world even need old-fashioned meat-bag secret agents anymore? Firing up an AI to do the work of an undercover agent seems far easier.

For example, Reuters reported on July 15, 2020 that the Jerusalem Post and the Times of Israel had inadvertently hired and published an entirely fake reporter on their pages. What purpose did the published content serve? Was it a rare one-off incident or something that happens far more frequently? The truth is: nobody can be sure.

Many of those practising “deepfake dark art” don’t want you to ever be sure about anything you see online ever again. Courting confusion, uncertainty and self-doubt is the objective. They know that the modern distributed office structure (which applies to newsrooms as much as anywhere else) has normalised never having to meet your colleagues face-to-face. And that for them is an opportunity to exploit.

Not long ago, the Financial Times was plagued with rumours that it too had failed to do proper due diligence on a contributor, after publishing what might have been construed as a purposefully provocative piece by an author who didn’t exist. At the time of publication, a wider search for the officially named author, Shruti Advani, yielded almost no results. What did come up was the following photo, which seemed straight out of the uncanny valley (the term techies and academics use to describe the eerie feeling that there’s something not quite right about a person’s look or actions, which could indicate they are a robot or synthetic):

Was Advani a real person or simply a deepfake? Had anyone met her? In the morning editorial conference colleagues behaved coyly, says Izzy. Nobody was quite sure about how the commissioning process had gone on that one.

In the end, it was former FT writer Emma Dunkley, then at The Times (but now confusingly back at the FT again), who eventually tracked Advani down for a sit-down interview — humanising her with a glamorous new photoshoot. Advani has since turned her low-key internet profile into a highly engaged one, tweeting regularly. If the whole thing was a set-up, it would have been an expensive one to operate, cover up and sustain until now. Too many players would have needed pay-offs to keep their silence for somewhat limited upside.

Advani is real. But, at the same time, so are conmen on ever more grandiose levels.

Fraudulent corporations have been known to make ridiculously out-sized offers to journalists to bury bad news or keep stocks pumped up.

At Wednesday’s showing of SKANDAL!, Dan McCrum’s new Netflix documentary about his experiences with German fintech fraud Wirecard, we learned that FT investigations boss Paul Murphy was offered $10m to stop writing about the company.

The documentary comes out on Friday and is a must-watch, not least because it illustrates just how easy it is to manipulate formal corporate registries like Companies House. Billions of dollars of fake transactions designed to juice the company’s share price were channelled in the scam through nondescript family residences without any of the occupiers’ knowledge.

Given that context, the Advani story is a great example of how the mere prospect that anything online could be a deep fake is enough to sow fear, uncertainty and doubt where there really shouldn’t be any.

The real tragedy of the situation is the fallout on the credibility and reliability of the entire news reporting system, and beyond. As the tale of James Jesus Angleton, one of America’s top spies, confirms, even the world’s foremost intelligence professionals are not immune. Angleton fell victim to reality distortion and an inability to trust basic facts when he became convinced Russian covert agents had infiltrated all levels of the US intelligence system. He termed the state of hyper-uncertainty and bewilderment he experienced the “wilderness of mirrors”.

A Vicious Circle

What this tells us is that once the virus of fakeability is included in the equation, it’s very hard to escape it.

All you need to do is access the list of deepfake celebrity porn videos to see how realistic such technologies have become.

This creates a bit of a catch-22. “I know if I post a selfie, they’re just gonna say that it’s photoshopped,” admitted Freeman. “Even if there was a video (of me) they’d say ‘oh, deepfake’ (…) like they’ll read into a frame where the camera was out of focus.”

A recent phenomenon born out of the paranoia that nothing online can be trusted to be real is the growing belief that the world might in fact be flat. Outsiders tend to dismiss the burgeoning community of “flat earthers” as unhinged nutters. But from the inside, there’s a distinct rationale at play. They see themselves merely as a community of individuals wise to the mischievous ways of government and other state-related institutions. What started as not trusting that the moon landings ever happened has thus twisted into a state where almost nothing can be believed unless it happens before one’s own eyes. They’re not necessarily stupid so much as victims of a growing Zeitgeist of mistrust, which can only intensify as simulation technology gets ever more real.

On the other end of the spectrum, however, are those who too readily dismiss the possibility that things are being faked right before their eyes.

Once upon a time, a listed multinational corporation approached an investigations company with a problem. Fake businessmen had contacted their managing directors with supposed acquisition offers. During these fake negotiations, the fake businessmen extracted information regarding the company’s activities in sanctioned countries and published it online to damage their share price.

The MDs, respectable and successful individuals, were stunned and humiliated. These fake persons had faces, email and social media accounts, colleagues, and the like. Distanced from the Zeitgeist, they had fallen prey to their own ignorance.

Trusting anything online too readily is also dangerous.

That ghost stories such as Freeman’s are spreading so freely should be treated as alarming. They indicate the fabric of cohesive society is potentially being unwound. Those affected by these claims sometimes understand this better than anyone.

“With the rise of deepfakes, you can make anyone anything”, Freeman pointed out. This fear of fakeness online is real even for him. The context of uncertainty online means paranoid Redditors “are naturally incredulous”, he said. “Whatever feeds into their narrative that the world is out to get them, they will promote”.

Thankfully, Freeman does see the brighter side to his quandary. “If they’re going to have a conspiracy about me it might as well be that I don’t exist,” he said. “With these conspiracies, I’ll be able to look back and have a bit of a laugh about it”.

For now, South Park’s Trey Parker and Matt Stone have wider society’s back.

Additional reporting by Izabella Kaminska.
