Below is my response to a Medium post on AI trends. The
YouTube link is from an ACLU article on how YouTube used machine learning to
remove a number of "hate" videos. I don't think the Supreme
Court has yet weighed in on the free-speech nature of the Internet and what's considered
"hate" speech (I could be wrong here, so don't quote me). As a writer,
I'm concerned because, while I don't care for "hate" speech, the
truth is it's a protected form of speech within very specific confines. Maybe
the Internet cannot support those confines, or maybe it can. Either way, it is up to those who
develop these technologies to respect those confines.
----------------------------------------------------
The YouTube link is disturbing in a lot of ways. I honestly don't think machine learning is up to the task. Either it will let some things through that it shouldn't, or it will filter out things it shouldn't. Worse, what if it's hacked to filter in certain directions, or what if the learning algorithms simply start leaning a certain direction on their own?
To be honest, I personally disagree with a lot of what is available on YouTube and the Internet as a whole. But as a nation founded on free speech, even material I disagree with has to remain accessible to adults. Otherwise we run the risk of governmentally controlled speech and NO freedom of the press.