Big Data and The Social Dilemma

Computer code on a screen

Written by: Christine Taylhardat 

Photo by: Markus Spiske on Unsplash

Apps are convenient and entertaining, yet we rarely reflect on the consequences of using them. Media platforms like Facebook, Twitter, and Instagram are much more than places to connect with friends and share content about our lives. Most of us are probably aware that the price of these “free” platforms is us, or rather the data that makes us up: we agree to its collection and use when we click “agree” at the end of those long Terms of Service agreements that the average user never reads in full. Yet do we really understand what this means? The personal information we relinquish is collected as ‘big data’: large, diverse datasets that can be stored, combined, and analyzed at scale. While big data can have real benefits, it also opens up a range of privacy and ethical issues.

How can this data be used?

For starters, many “free” platforms grant users access in return for showing them advertisements. Consumer data is collected both from the information users provide and from their behaviour and interactions on the platform. This data can then be used to improve the platform’s own service, or to tailor ads more precisely to target audiences. All of this may be done without the user’s awareness.

Taking Facebook’s data policy as an example, I will outline how user information is collected, how it is used, and how it is shared. Facebook can collect the data you provide directly, information your friends share about you, data collected from your devices, and information gathered by third parties that use Facebook tools. Your data is used to personalize your Facebook feed (including ads) and to create demographic information for businesses. It is shared in different ways, including with advertisers looking to display targeted ads effectively.
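
To make these categories a little more concrete, here is a minimal, purely illustrative sketch (in Python) of how a platform might group the kinds of data described above into a single profile and use it to pick ads. All class names, fields, and values here are hypothetical and simplified; they are not taken from any real Facebook API.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the data categories a platform collects.
@dataclass
class UserProfile:
    provided: dict = field(default_factory=dict)      # info you type in: age, city, interests
    behavioural: dict = field(default_factory=dict)   # likes, clicks, time spent per topic
    device: dict = field(default_factory=dict)        # device type, operating system, location
    third_party: dict = field(default_factory=dict)   # data sent back by sites using platform tools

def match_ads(profile: UserProfile, ads: list[dict]) -> list[dict]:
    """Return ads whose targeting criteria overlap with the profile's interests."""
    interests = set(profile.provided.get("interests", [])) | set(profile.behavioural.get("topics", []))
    return [ad for ad in ads if interests & set(ad["target_interests"])]

# Example: a profile assembled from several sources is enough to target ads.
user = UserProfile(
    provided={"age": 29, "city": "Toronto", "interests": ["hiking"]},
    behavioural={"topics": ["camping gear", "politics"]},
    device={"os": "Android"},
)
ads = [
    {"name": "tent sale", "target_interests": ["camping gear"]},
    {"name": "luxury cars", "target_interests": ["cars"]},
]
print([ad["name"] for ad in match_ads(user, ads)])  # ['tent sale']
```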

The way privacy is currently approached is problematic for users. Even if a user reads through the whole Terms and Conditions document, they still may not be properly informed about the use of their data: the length and convoluted language of these documents make them hard for the average person to understand. These gaps in understanding, combined with technical loopholes, leave room for abuse. One particular concern is how much can be inferred from supposedly anonymous data points. With the amount of data collected about one person across different platforms, it becomes easier to connect the dots and assemble a fuller picture of the individual.
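
The re-identification concern is easier to see with a toy example. The sketch below uses made-up data and standard pandas calls to join two “anonymized” datasets from different platforms on shared quasi-identifiers (postal code and birth year). Neither dataset contains a name, yet the overlap is enough to single out one person and combine what each platform knows about them.

```python
import pandas as pd

# Two "anonymous" datasets released by different platforms (toy data).
fitness_app = pd.DataFrame({
    "postal_code": ["M5V", "M5V", "H2X"],
    "birth_year":  [1990, 1985, 1990],
    "daily_route": ["waterfront", "downtown", "plateau"],
})
shopping_site = pd.DataFrame({
    "postal_code": ["M5V"],
    "birth_year":  [1990],
    "purchases":   ["running shoes"],
})

# Linking on shared quasi-identifiers: no names involved, but the overlap
# still narrows down to a single individual and merges their records.
linked = fitness_app.merge(shopping_site, on=["postal_code", "birth_year"])
print(linked)  # one row: the 1990-born M5V resident, route and purchases now combined
```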

The rapid technological advancements that our society faces make it difficult to control the collection and use of big data, and our enjoyment of these platforms hinders our ability to critically analyze them. All of this is changing how our society functions.

To understand just how badly big data can be misused, it helps to look at a high-profile (and recent) example: the Cambridge Analytica/Facebook scandal.

Cambridge Analytica (CA) was a data analytics firm that used Facebook data to influence political campaigns - such as the UK’s Brexit referendum and US President Donald Trump’s election campaign - through ad targeting. A former CA employee turned whistleblower, Christopher Wylie, called the company’s operations a “grossly unethical experiment” and a “propaganda machine,” and said that CA gained access to data on 230 million Americans.

So, how did this happen?

Facebook’s data policy allows user data to be collected for academic purposes. So, when CA was looking for a way to get data on its target populations, it reached out to Cambridge University, where Professor Aleksandr Kogan created an app that tested people’s personality type. The app asked users for access to their Facebook profiles and also mined data from those users’ friends, who had never consented to sharing their information. Facebook allowed Kogan to collect this data for research only, but he sold it to CA. The data was then analyzed to connect people’s personality traits and Facebook activity with their political leanings, which helped CA develop the algorithms it used to target ads. CA also used this information to craft ads with content that people would connect with, and thus be more likely to be influenced by.
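
In very simplified terms, the process described above can be thought of as a three-step pipeline: learn a link between personality traits and political leaning from the quiz-takers, predict the leaning of everyone else in the harvested pool, and pick the “persuadable” segment for tailored ads. The sketch below is a toy illustration of that idea using made-up numbers and a standard scikit-learn classifier; it is not CA’s actual model, and the trait scores and thresholds are purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: toy training set of quiz-takers' trait scores (e.g. openness, neuroticism)
# paired with a known political leaning (0 or 1). Entirely made-up numbers.
traits = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9], [0.5, 0.5]])
leaning = np.array([1, 1, 0, 0, 1])
model = LogisticRegression().fit(traits, leaning)

# Step 2: score the much larger pool of profiles harvested via users' friends.
harvested = np.array([[0.85, 0.25], [0.15, 0.80], [0.55, 0.45]])
probs = model.predict_proba(harvested)[:, 1]

# Step 3: "microtarget" the persuadable middle rather than firm supporters or opponents.
persuadable = [i for i, p in enumerate(probs) if 0.4 < p < 0.6]
print("send tailored ads to profiles:", persuadable)
```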

While Facebook was not initially aware of how the massive amounts of collected data were being used, the company was blamed for failing to properly protect its users’ information and for not enforcing its own privacy measures. Rather than ensuring that protection itself, Facebook left third parties responsible for safeguarding the data they collected.

The UK, US, and Canada conducted investigations following the scandal. The US Federal Trade Commission (FTC) fined Facebook $5 billion for privacy violations, after finding that Facebook had deceived its users and “undermined consumers’ choices.”

This one case has larger societal implications. That is not to say that without Facebook or CA the outcome of the Brexit referendum or the US election would have been different. Yet CA’s involvement in elections shows that big data has the capability to undermine the principles of democracy. CA’s tactics differed from those of a typical political campaign: it specialized in microtargeting individuals based on their personal data, rather than broadly profiling a candidate’s voter base.

As big data collection continues to grow in scale and value, so does the potential for misuse, and our existing regulations may not be sufficient to control companies like Facebook. In Canada, the Privacy Commissioner’s Office investigated the allegations Facebook faced in relation to CA and provided Facebook with a set of recommendations to help protect users’ data in the future. Facebook rejected the recommendations, and since the Privacy Commissioner’s Office does not have the power to enforce them, our information is still likely to be given to and used by other parties without us being properly informed of how it will be used.

The issue of privacy and big data transcends Facebook and CA. Solutions are likely to be complex, given the scale of the problem and the concentration of power in a small number of companies. Systemic solutions will likely require balancing consumer rights against business interests, as big data grows ever more important to the economy.

As an individual, you can try advocating for your data rights. After all, it is your information, and you deserve to know where it goes and how it is used. And if you find it difficult to imagine giving up your social platforms, there are still some things to keep in mind while using them:

  • Be informed: Read the terms and conditions, privacy policies, etc. Even if the language is ambiguous, they can give you a basic idea of what information you are granting the company access to and how that information will be used.
  • Be careful about what personal information you make public or share on media platforms.
  • Don’t rely on social media for your news, and be wary of sharing fake or unreliable stories. Remember to check your sources!

Big data has great potential for good and for harm. It is growing quickly, bringing us into uncharted legal territory. Many companies’ privacy practices are questionable, and profit-driven companies are unlikely to police themselves effectively. This means new laws and/or regulations are needed, and quickly, or high-profile scandals like Cambridge Analytica will happen again. Citizens’ personal data must be afforded more protection if we are to avoid another case that not only invades our privacy but also damages the structures of our society.

