There has also been a lot of backlash against how AI developers have used the web as a training ground for their models. My experience has been that a lot of IT people embrace the saying "Better to ask forgiveness than permission" — whether that means rolling out new patches to an application without telling anyone, switching infrastructure behind the scenes, or letting customers know they were hacked months or years after the breach happened. It's an all-too-common occurrence.
So I have been looking into artificial intelligence to see how things are progressing. Some of it is reassuring. Microsoft is advocating what it terms responsible AI, with six main principles to guide AI development. Adobe's generative AI platform spells out in its usage terms that content created with its tools cannot be used to train AI/ML models. MIT researchers in 2023 released a tool called PhotoGuard, which subtly alters images to keep AI engines from making use of protected files in their models. And the IPTC (International Press Telecommunications Council), in partnership with the PLUS coalition, adopted new metadata properties that let photographers indicate their pictures are not available for training AI models.
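As a rough sketch of how that last mechanism works in practice, here is a small Python script that shells out to ExifTool (assumed to be installed, and recent enough to know the IPTC/PLUS Data Mining tag) to embed the "prohibited for AI/ML training" signal in an image's XMP metadata. The vocabulary URI and file name below are illustrative; check them against the current PLUS specification and your ExifTool version.

```
import subprocess

# Illustrative sketch: embed the IPTC/PLUS "Data Mining" property in an
# image's XMP metadata to signal that it may not be used for AI/ML
# training. Assumes ExifTool is installed and supports the XMP-plus
# DataMining tag; the vocabulary URI comes from the PLUS spec.

PROHIBIT_AI_TRAINING = "http://ns.useplus.org/ldf/vocab/DMI-PROHIBITED-AIMLTRAINING"

def mark_no_ai_training(image_path: str) -> None:
    """Write the PLUS Data Mining prohibition into the file's XMP."""
    subprocess.run(
        [
            "exiftool",
            # The trailing '#' writes the raw URI, bypassing ExifTool's
            # human-readable value conversion.
            f"-XMP-plus:DataMining#={PROHIBIT_AI_TRAINING}",
            image_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    mark_no_ai_training("photo.jpg")  # hypothetical file name
```

Of course, this only helps if a crawler bothers to read and honor the field, which is exactly the weak spot.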
These initiatives are reassuring, but they are all voluntary. Nothing prevents a developer from bypassing these safeguards and doing whatever they want. So until better controls are in place, if that is even possible, there will likely be people pushing the envelope: staying on the legal side, perhaps, but not necessarily the ethical one.