Since I started earning money, each year I buy myself a birthday present. Something a bit more luxurious or expensive than I would normally buy. Like nice sunglasses or a bag. Something unnecessary. This year I took a different tack and gifted myself participation in the Akimbo Bootstrapper's Workshop, which started with its first lesson Wednesday. I've been growing increasingly interested in the intersection of commercial and service industries, and I figured it would put me so far out of my comfort zone that I'd be bound to learn something of value.
The first lesson prompted me to think about the things I spend money on that I could've made or sourced myself, and why I choose to do so. For me, I seem to be happy to spend money on things if they make me feel good in the moment, help build better relationships, allow me to spend more time on the things I prioritise, or are investments in the long game.
But I've also been thinking a lot lately about the stuff that we don't pay for. Well, not in money anyway. Because the reality is, if it's a product or service that's free then we're paying for it somehow. I'm talking things like Facebook, Google services, etc. I use these services, and I've always known that I was paying for this through advertising and been happy enough to accept that. And I am still happy enough to accept that, for some things.
After watching documentaries like The Great Hack on Netflix it is hard not to be a little bit concerned about how Cambridge Analytica were able to influence outcomes in the 2016 US election, Brexit and more, with basically no backlash. You might think documentaries can't be trusted; they are a narrative, not necessarily the whole story. I agree you can't just gobble it all up. But then you listen to something like this podcast, an interview with a guy called Roger McNamee who was a mentor to Mark Zuckerberg and even introduced him to Sheryl Sandberg, and you start to understand it a little bit more. What's most scary about it is the lack of transparency, and the failure of companies such as Facebook to live up to their civic responsibility.
I want to share one of the things I learned. I'm not being condescending here. I genuinely am a bit aghast that I've only just learned about this stuff. I'm willingly making myself look stupid here (maybe I'm the only one who didn't know this?) because I think it needs to be talked about and acted on.
I've always been pretty willfully naive regarding my use of apps and the internet. When I use things like Facebook, or Google products (which I use a lot — even my nearly-two-year-old can say something that sounds like "hey Google"), I would take it at face value that while the advertising may be targeted to me, the content was the content. And I don't really mind advertising being targeted to me. I love online shopping. Sure, I thought maybe someone else might see it in a different order or something, but I figured they essentially see the same thing. But this isn't what happens at all. What we see depends on our filter bubble.
"A filter bubble is an algorithmic bias that skews or limits the information an individual user sees on the internet. The bias is caused by the weighted algorithms that search engines, social media sites and marketers use to personalize user experience."
This filter bubble thing is really important to get your head around. There's a really good TED talk by Eli Pariser from 2011 (note, five years before the 2016 election!) about the invisible algorithmic editing that happens on the web. Watch this! He explains it way better than I can, and it's only 8 minutes. This isn't some tinfoil conspiracy theorist kind of thing. According to Mark Zuckerberg himself, at any given time Facebook has something like 10,000 different versions deployed, depending on what the engineers want to try out.
A quick summary is that if I type something into a Google search, then I get the results they want me to see. Perhaps more importantly, I don't know what they edit out. My search results are going to be very different to yours, based on our search history, use of Google maps, what I've clicked on in Chrome, my Gmail...
Similarly, when I see my newsfeed or go into a group in Facebook, I see a selection of posts they have decided are most relevant to me. Same goes with most news sites, or big shopping sites, etc. This might not matter if I'm searching for something frivolous, or if my Facebook feed is mainly just a bunch of photos from people's kids during Book Week. It may even be useful. But it's the lack of transparency that's a problem. Particularly if you're using these products as your source of news and information. There's a chance that your perspective will be skewed and you won't even realise. In the case of Cambridge Analytica, they identified people who met the profile for swing voters and filled their newsfeeds with content that would influence their vote. And it worked. Multiple times. In different countries.
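To make the mechanics concrete, here's a toy sketch of how this kind of weighted feed ranking works. Everything in it is invented for illustration — real platforms use vastly more signals and far more complex models — but the core idea is the same: two people drawing from the exact same pool of posts end up with different feeds, and neither sees what was filtered out.

```python
# Toy illustration of a filter bubble. The post data, weights and the
# build_feed function are all made up for this example — this is NOT
# how any real platform is implemented, just the general principle.

posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "sport"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "kids"},
]

def build_feed(posts, click_history, top_n=2):
    """Rank posts by how often the user has clicked that topic before,
    and return only the top few — the rest are silently dropped."""
    weights = {}
    for topic in click_history:
        weights[topic] = weights.get(topic, 0) + 1
    ranked = sorted(posts, key=lambda p: weights.get(p["topic"], 0),
                    reverse=True)
    return [p["id"] for p in ranked[:top_n]]

# Same posts, two users with different histories, two different feeds.
feed_a = build_feed(posts, ["politics", "politics", "sport"])  # [1, 3]
feed_b = build_feed(posts, ["kids", "kids", "sport"])          # [4, 2]
```

Neither user is told that posts were removed, or why — which is exactly the transparency problem described above.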
Now think about how this might affect you if you're using these products for professional purposes. Many professional groups host their communities in groups on Facebook or LinkedIn. The PSA's Early Career Pharmacists group is a large, very active group of over 9,000 members on Facebook. GPs Down Under is another closed Facebook group with an active community of more than 6,000 members. But while these groups may be private in terms of not being visible to outsiders, it certainly does not mean that the information shared remains between the members of the group.
The data that we offer these platforms is what their business is all about. We aren't the customers of these platforms, we're the commodity. The customers are the people that pay. They pay for the opportunity for direct marketing, they pay for the rich data that is found in our posts, they pay for the opportunity to recruit people for employment. It doesn't necessarily mean that professional groups shouldn't use these free platforms, but it's something that should be seriously considered and the membership can decide if it's a trade off they're comfortable with. If something is free, then we have to think about how they are getting their money.
In the US there are a few products that offer social-network-type platforms especially for healthcare professionals, usually doctors. The biggest one is Sermo, which is marketed as a place where "doctors can candidly share their true feelings about their profession and lives, and talk 'real world' medicine". They claim to have nearly 400,000 members in the US, and it's available in seven countries. It started as a pharmacovigilance tool set up after the Vioxx recall. Sounds pretty good, right?
Good, except for the fact that they too are listening in on the conversation and providing that data to pharmaceutical companies and whoever else wants to pay for it. It's hidden in the terms and conditions. It may be presented as though it's just like sitting and chatting in a doctors' lounge, but in real life, if you discovered microphones hidden in pot plants or behind pictures, you probably wouldn't feel all that good about it. You might think it doesn't really matter all that much if they use this data to inform sponsored posts on medications, that you know how to evaluate information sources. But it gets back to the whole filter bubble thing. In environments like this, how do you really know what you know to be true is true? In my mind, the trade-off of a free platform in exchange for data mining doesn't pass the sniff test. I don't trust it. (I'm not the only one — you can read more about this here.)
Another similar product in the US is Doximity. Another free platform without advertising, but they openly disclose that they make their money through providing recruitment opportunities. The US is a big market, so that sounds pretty reasonable to me. And they offer some useful tools for collaborative practice too. That might be a trade off that I'd be comfortable with. It's less likely to be viable in Australia though. We need another option.
I think social networks are a valuable resource for healthcare professionals, and I think we need a more independent, secure platform to facilitate this. And it's not just me. In their 2012 systematic review, Frances Cunningham and colleagues at the Australian Institute of Health Innovation argue
"networks can represent not just the social glue of the professional interaction but the sociological building blocks of effective organisations".
This is the problem that I want to help solve. I am going to provide an independent, secure online platform to host communities of healthcare providers. More than that, this platform is also going to offer a place for members of different communities (and disciplines) to connect with one another, so that we can all work together to build a more sustainable and well-functioning healthcare system. Anyone want to help me make it happen? Let me know on Twitter @laurencortis.