Google is planning to allow users under 13 to use its AI product, Gemini. But like so many others in this space, Google is giving the game away:
Like its Workspace for Education accounts, Google says children’s data will not be used to train AI. Still, in the email, Google warns parents that “Gemini can make mistakes,” and kids “may encounter content you don’t want them to see.”
Our legal frameworks have begun to fall apart in the age of AI. Section 230, for instance, is difficult to apply when no person is clearly responsible for what gets published. If I prompt an AI to make me something, and it generates something illegal, how do we regulate that?
There have been some strides, but a combination of powerful lobbying and technical incoherence in the federal government has slowed progress.
Google is going to test the limits of COPPA with this one. If you work on the web, you have likely had to make adjustments to sites to ensure compliance with COPPA. It’s a pretty smart law, and it protects children using the web from having their data improperly collected. That’s why Google is warning parents up front: that way, they can say later that they tried their best, but AI is just too difficult to control.
We can’t allow companies to pass their accountability off onto the machines.