
Movies, Comics, and Tech: The Modern Day Battle of Good Vs. Evil

Updated: Jan 25, 2021






After a long Saturday and Sunday of basically doing absolutely nothing, I took the next two days off and spent most of my time watching the Marvel Avengers series. We computer guys don't get out much... While this is one of my favorite movie series, one of the movies caught my interest more than the others: Avengers: Age of Ultron.

Now, while this is a super cool, family-friendly film, it has given me a few sleepless nights. The reason I have not been able to sleep at night is our artificially intelligent friend, Ultron. Why am I scared of a fictional character, you ask? Because do we actually know what the real outcome of all this AI stuff will be? Will AI benefit society, like Jarvis does? Or will it backfire on us like Ultron? If these artificially intelligent systems can make decisions on their own, what sort of decisions will they actually make? And really, there are already artificially intelligent systems that run our everyday lives (like Siri...).




For those of you who are not familiar with Ultron, he is a Marvel character (a bad guy) developed by Tony Stark (Iron Man) and Dr. Bruce Banner (Hulk). He is an artificially intelligent computer system (AI) built to be a protective shield around the world. The idea was that this AI would be smart enough to gather information from different inputs, such as the internet, satellites, and cell phones, and make choices on how to protect the human race.

Those of you familiar with the story know that this is not exactly how it turned out. However, this brings up some interesting discussion points about AI and how it can and will be implemented in our society.

For example, IBM's Watson is in some ways very similar to Ultron, like how it gathers information from different input streams to make decisions. Watson has competed on Jeopardy!. It has been implemented in business to help make decisions (similar to how Jarvis basically runs Stark Industries). Some countries even use it in hospitals to help diagnose patients. Watson makes its informed choices by processing the information humans give to it without any subjectivity or bias.
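To make the "different input streams" idea a little more concrete, here is a toy sketch in Python. This is not Watson's real architecture (or any real product's); the hypothesis names, confidence numbers, and the `rank_hypotheses` function are all made up for illustration. The point is just to show how a decision-support system might combine scored evidence from several sources into one ranked recommendation.

```python
# Toy illustration only: combine evidence from several hypothetical
# "input streams" into a single ranked list of hypotheses.
from collections import defaultdict


def rank_hypotheses(evidence_streams):
    """Sum confidence scores for each hypothesis across all input streams.

    evidence_streams: list of dicts mapping hypothesis -> confidence (0..1).
    Returns hypotheses sorted from most to least supported.
    """
    totals = defaultdict(float)
    for stream in evidence_streams:
        for hypothesis, confidence in stream.items():
            totals[hypothesis] += confidence
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Made-up inputs, e.g. lab results, a symptom checklist, patient history.
    streams = [
        {"flu": 0.7, "common cold": 0.5},
        {"flu": 0.6, "allergies": 0.3},
        {"common cold": 0.4, "flu": 0.2},
    ]
    for hypothesis, score in rank_hypotheses(streams):
        print(f"{hypothesis}: {score:.1f}")
```

Of course, a real system would weigh sources very differently and learn those weights from data, but even this tiny example shows where the "no subjectivity or bias" claim gets tricky: the numbers fed in still come from humans.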


I also think this poses some interesting questions for the developers who are building these artificially intelligent systems. Do we, or should we, hold them accountable for the negative effects these systems may produce? Is having good intentions as a developer good enough when building these systems? Or do developers need to look at all potential risks and implement some sort of safeguard to prevent unintentional negative effects?


Lots of room for discussion here! I think our Byte Club hosts should dive into this in more detail on the podcast (hint... hint...).

By the way, if you are interested in starting a podcast, you can host through Buzzsprout. If you use this link here, you can get a $20 Amazon gift card after being a member for two months!


Until next time, this is Gary GPU signing off!



I hope that I have filled your minds with curiosity, at least enough to read my next post...


