March 20, 2016
Technology legislation is especially hard to enforce because legislators frequently have a hard time understanding the technology itself. Given its longevity and its propensity for rapid, prolific progress, the automotive industry is among the best acquainted with congressional ignorance. An excellent contemporary illustration is the Apple encryption case. Lawmakers have a fundamental misunderstanding of the crux of that case. Setting aside all other nuances, how can meaningful legislation be expected on that case without Congress grasping the gravity of its requests? If Apple complies with legislators' requests to create technology to break into the iPhone in question, they will (to borrow a phrase) be both creating and opening Pandora's box: once that tool exists, anyone who obtains it could break into any such device and steal its information.

Unfortunately, legislators not comprehending a topic is not an acceptable excuse. In this day and age, not being informed is indicative of apathy. Thanks to the internet, even as little as an hour can yield at least a rudimentary understanding of just about any subject. Supplement that with a vast array of experts able (and enthusiastic) to explain further, and the insult expands.

Fortunately, for once, Congress seems to be trying to get ahead of things with autonomous auto technology. Not only are lawmakers already meeting with the major players in that field, but 30 states already have some sort of regulatory framework for autonomous autos prepared. Recently, California riled Google by adding an addendum to its established laws for self-driving cars: namely, that the car must have a driver (or at least someone ready to take the wheel) at all times. Honestly, in the nascent test phase we are still in, requiring a licensed driver behind the wheel of an autonomous auto is not only reasonable but prudent. Additionally, it helps legislators appease all those professional drivers terrified of losing their jobs to technology overnight.

Google is so confident in its autonomous autos that not only does it want them on the road unsupervised as fast as possible, it is also willing to assume all culpability. The company already has millions of autonomous miles under its belt, and in those years of experience it has proven that its automatons are much safer than irrational commuters. To an extent I agree with Google. Autonomous autos are already much safer than human drivers, but that is because they are held to a much higher standard. Any idiot can get a license, yet most people are terrified of sharing the road with a Hackermobile. Actually, most people are right to be afraid, but not for the reason they think. Give the Hackermobile a month and it will drive just as well as its owner would, so if the driver can get a license, the Hackermobile should be able to drive.

The only real outstanding issue here ties together the Apple example and the Hackermobile: what if an autonomous auto gets hacked? As things stand that is a very real possibility, and none of the autonomous auto heavyweights has an answer. Google is against even a mandatory minimum level of cybersecurity encryption, saying, "the best action is to take no action."
Tags: Autonomous Auto, Google, Hacking, Technology