iPads, MakerEd and More in Education
News, reviews, and resources for AI, iTech, MakerEd, Coding, and more…
Curated by John Evans
Rescooped by John Evans from iGeneration - 21st Century Education (Pedagogy & Digital Innovation)

The Leader's Guide to Unconscious Bias - interview with authors via Scott Miller - FranklinCovey

Join the co-authors of FranklinCovey's newest book, The Leader's Guide to Unconscious Bias: How to Reframe Bias, Cultivate Connection, and Create High-Performing Teams, for an interview with Scott Miller.

Via Tom D'Amico (@TDOttawa)
Scooped by John Evans

Algorithms, the Illusion of Neutrality – Towards Data Science

Bias is a fundamental human characteristic. We are all biased, by our very nature, and every day we make countless decisions based on our gut feelings. We all have preconceived ideas, prejudices, and opinions. And that is fine, as long as we recognize it and take responsibility for it.

The fundamental promise of AI, besides the dramatic increase in data processing power and business efficiency, is to help reduce the conscious or unconscious bias in human decisions. At the end of the day, this is what we expect from algorithms, isn't it? Objectivity and mathematical detachment rather than fuzzy emotions; fact-based rather than instinctive decisions. Algorithms are supposed to alert people to their cognitive blind spots, so they can make more accurate, unbiased decisions.

At least that’s the theory…
Scooped by John Evans

Introducing ‘Designing responsibly with AI’ – UXDesign

Part 1 of the series 'Designing responsibly with AI,' which helps design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs for people and society.
Scooped by John Evans

7 Tips for Teaching Students How to Recognize Bias in an Era of Fake News - We Are Teachers

When students are learning about research topics and current events, they must also learn about how perspective and bias may affect the information they are reading. Teaching these lessons explicitly is critical in this era of “fake” news.

The following tips and activities are designed to help students understand the choices that journalists make that may affect how readers interpret a story.
Scooped by John Evans

Everyone Has Invisible Bias. This Lesson Shows Students How to Recognize It. | EdSurge News

Perhaps the biggest challenge we face when accessing information is confronting our biases before we are able to unpack the opinions and insights of other people. Often these biases are unconscious or implicit, meaning we might not even be aware we have them.

But these implicit biases have real implications, and educators are no more immune to them than students.
Scooped by John Evans

Why facial recognition's racial bias problem is so hard to crack - CNET


""Jmmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal.

 

Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, as part of a test the American Civil Liberties Union ran last year of the Amazon Rekognition program.  Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color.

 

The findings reinforce a growing concern among civil liberties groups, lawmakers and even some tech firms that facial recognitioncould harm minorities as the technology becomes more mainstream. A form of the tech is already being used on iPhones and Android phones, and police, retailers, airports and schools are slowly coming around to it too. But studies have shown that facial recognition systems have a harder time identifying women and darker-skinnedpeople, which could lead to disastrous false positives."
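
To make concrete what such an audit tallies, here is a minimal, hypothetical Python sketch. It is not the ACLU's actual methodology, and every record and group label below is invented for illustration: it simply shows how a per-group false-match rate, and each group's share of all false matches, might be computed from audit results.

from collections import Counter

# Hypothetical audit records: (member_id, demographic_group, falsely_matched).
# None of these values come from the ACLU study.
audit_results = [
    ("M01", "person_of_color", True),
    ("M02", "white", True),
    ("M03", "person_of_color", True),
    ("M04", "white", False),
    ("M05", "white", False),
    ("M06", "person_of_color", False),
]

# Count the people tested and the false matches in each group.
tested = Counter(group for _, group, _ in audit_results)
false_matched = Counter(group for _, group, hit in audit_results if hit)
total_false = sum(false_matched.values())

for group in tested:
    rate = false_matched[group] / tested[group]
    share = false_matched[group] / total_false if total_false else 0.0
    print(f"{group}: {rate:.0%} false-match rate, {share:.0%} of all false matches")

A disparity like the one CNET describes would show up here as one group's share of the false matches far exceeding its share of the people tested.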

Rescooped by John Evans from Computational Tinkering

The Algorithms Aren’t Biased, We Are – MIT MEDIA LAB 

"Excited about using AI to improve your organization’s operations? Curious about the promise of insights and predictions from computer models? I want to warn you about bias and how it can appear in those types of projects, share some illustrative examples, and translate the latest academic research on “algorithmic bias.”"


Via Susan Einhorn