(Begins with an example – Eli is politically liberal, but he enjoys entertaining the suggestions and comments of his conservative friends. He went on Facebook to view those conservative friends’ posts, only to find that Facebook had quietly filtered their updates out of his feed.)
INFORMATION AND ACCESSIBILITY
There’s a huge amount of information being created daily…in social media and in general. It’s far more than we can pick through as human beings, and, ultimately, trying is impossible. At the core, there’s one effective method: the Amazon method – “If you like this, you’ll like that.” A simple algorithm groups ideas, concepts, and items together to save you from the exabytes of data bombarding you daily.
But it’s faulty…there are holes…or are there? The example shown: “If you like The Wizard of Oz, you’ll like The Silence of the Lambs.” Further down the chain it gets to “If you like milk, then you’ll like Rush Limbaugh.” Is that right? Your movie preferences, your dietary preferences, sure…but your political preferences based on these things?
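As a rough illustration of that kind of grouping – a minimal sketch, not Amazon’s actual system, with invented toy data and a made-up recommend function – the core idea is just co-occurrence counting: items that keep showing up together in people’s histories get recommended to each other, whether or not the connection means anything.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase/viewing histories -- entirely made-up data for illustration.
histories = [
    {"The Wizard of Oz", "The Silence of the Lambs"},
    {"The Wizard of Oz", "The Silence of the Lambs", "milk"},
    {"milk", "Rush Limbaugh show"},
    {"milk", "Rush Limbaugh show", "The Wizard of Oz"},
]

# Count how often each pair of items appears in the same history.
co_counts = defaultdict(int)
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return items most often seen alongside `item` ('if you like X, you'll like Y')."""
    scores = {}
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] = scores.get(b, 0) + count
        elif b == item:
            scores[a] = scores.get(a, 0) + count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("The Wizard of Oz"))  # co-occurrence, not causation
print(recommend("milk"))              # how a 'milk -> Rush Limbaugh' chain can appear
```

The sketch shows why the chains feel arbitrary: the algorithm only knows that things co-occur, not why.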
When Eli first heard that Google searches were going to be different for different people, he didn’t believe it. Sure enough, though – two of his friends…white males, same age…googled “Egypt”. One got more news-based results, the other more travel-based. So, how does Google do it?
70 or so different factors come together (not just being logged into Gmail or anything like that)…
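To make that concrete, here’s a hedged sketch of the general mechanism – not Google’s real ranking code; the signal names, weights, and URLs below are invented: a handful of per-user signals get folded into a score that reorders the same candidate results differently for different people.

```python
# Hypothetical personalization signals for one user -- names and values are invented.
user_signals = {
    "location": "Cairo",                      # unused in this toy scorer, shown for flavor
    "recent_topics": {"news", "politics"},    # what this user has been reading lately
    "past_clicks": {"bbc.com": 12, "lonelyplanet.com": 1},
}

# Candidate results the engine would otherwise show everyone in the same order.
candidates = [
    {"url": "bbc.com/egypt-protests", "topic": "news"},
    {"url": "lonelyplanet.com/egypt", "topic": "travel"},
    {"url": "wikipedia.org/wiki/Egypt", "topic": "reference"},
]

def personalized_score(result, signals):
    """Combine a couple of signals into one score; real engines weigh dozens of factors."""
    score = 0.0
    if result["topic"] in signals["recent_topics"]:
        score += 2.0                                       # matches the user's recent interests
    domain = result["url"].split("/")[0]
    score += 0.1 * signals["past_clicks"].get(domain, 0)   # domains the user already clicks on
    return score

ranked = sorted(candidates, key=lambda r: personalized_score(r, user_signals), reverse=True)
for r in ranked:
    print(r["url"])
# A friend with different signals would see these same results in a different order.
```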
PERSONALIZATION AND RELEVANCE
Personalization is the way of the world today. There’s a HUGE data market behind the scenes online (obviously), and that data is being used to build a personalized experience. Google’s CEO put it this way: “It will be very hard for people to watch or consume something without it being tailored for them.” Absolutely true.
We’re surrounded by filters today…they’re deciding what we see and what we don’t see. You’re in your own Filter Bubble. Your personal universe of information is completely different from everyone else’s. Unlike a TV channel or a magazine, you don’t choose it – we don’t even know exactly who the search engines think we are. Because of that, we can’t tell what’s been edited out. It’s a passive experience, too – you don’t even realize it’s happening, but it’s there. It’s always there. Creepy.
THREE CHALLENGES
1. The Distortion Problem
We know from Psych research how much people want to be right. People love seeing in media what they already believed to be true. When they see news or stories that contradict what they feel, they get cranky. So, what kind of information do we need to make people happy? You guessed it – What makes them think they’re right.
2. The Psychological Equivalent of Obesity
A study of Netflix users looked at people’s queues and noticed that different movies spent very different amounts of time there. As soon as Iron Man entered the queue, people would bump it up to the number one spot. A documentary on education (Waiting for “Superman”), however, took forever and a day to actually get watched. The movies fell into two categories: Want movies and Should movies. The wants are the fun ones; the shoulds are the Holocaust documentaries, the French art films, etc. – the movies you feel, deep down, that you should watch. The impulsive, present-minded self vs. the pensive, self-analytical self.
The balance (example):
Justin Bieber vs. Afghanistan
The Oscars vs. Homelessness
Agreeable Ideas vs. Challenging Ideas
People Like You vs. People Different than You
The danger of personalization that caters to that impulsive, present-minded self is that we end up gorging ourselves on intellectual junk food: what feels good, not what’s necessarily good for us.
3. A Matter of Control
Algorithms are good, but they’re far from perfect. There are things that human editors still do better:
1.) Anticipation
2.) Risk-taking (engines tuned to minimize root-mean-squared error play it safe – restaurant searches tend to come back middle-of-the-road; see the sketch after this list)
3.) The Whole Picture
4.) Pairing
5.) Social importance
6.) Mind blowingness
7.) Trust (Editorial)
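Why does minimizing root-mean-squared error make an engine timid? A small worked example with made-up ratings (not any real recommender): a predictor that always guesses the safe average beats a bolder predictor on RMSE, even though the bold one is exactly right more often.

```python
import math

# Made-up ratings a user would actually give five restaurants (1-5 scale).
true_ratings = [5, 1, 5, 1, 3]

# "Safe" predictor: always guesses the overall average, 3.0.
safe_preds = [3.0] * len(true_ratings)

# "Bold" predictor: commits to strong picks; exactly right three times, badly wrong twice.
bold_preds = [5.0, 5.0, 1.0, 1.0, 3.0]

def rmse(preds, truth):
    """Root-mean-squared error - the metric many recommenders are tuned to minimize."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth))

print("safe RMSE:", round(rmse(safe_preds, true_ratings), 2))  # ~1.79
print("bold RMSE:", round(rmse(bold_preds, true_ratings), 2))  # ~2.53

# Squared error punishes big misses hardest, so hedged middle-of-the-road guesses win --
# one reason algorithmic picks drift toward the safe and inoffensive.
```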
There’s no weight given, no bonus points, for offering huge “Aha” moments. There’s no personal relationship to help lead you outside of your trust zone…no “Just trust me” from a computer or algorithm. We’re handing control over to an algorithm…Eric Schmidt says that “People want Google to tell them what to do next.” Is that good or is that scary?
GOOD OR BAD?
These personalized technologies are great, but ultimately they’re going to keep people from what they need to know. Why? Because those things sit outside their comfort zone, and the algorithm doesn’t want to upset the touchy human. So how do we make this better and still personalize?
1.) The filterers need to be better. Not just what I like, but what I need.
2.) Filter literacy – Help people understand how the filters work, and the filters will work better. Kranzberg’s First Law: “Technology is neither good nor bad; nor is it neutral.”
3.) Give students the tools to build better filters.
CONCLUSION
We need the Internet to be that great thing that we believed it would be. It got good, and we got lazy. We got comfortable. Complacent. Against Journey's best advice, we stopped believing. It's time to believe we can make it better and amazing again.