Editorial: Quiet Quitting and Other Negative Impacts of Machine Learning and Media


Quiet quitting, work from home, etc.

I know most of the world lost its sanity over the past few years.  Even so, the leveraging of greed and laziness against our general success appears to be making headway.  I do not remember a time in U.S. history when self-declared social media experts, the general media, and others recommended, and actively supported, putting out the minimum effort and seeing how little you can do (quiet quitting).  I suppose some of this stems from the same people who said that no one would want to attend face-to-face conferences now that they can do it via the web or the cloud (conference attendance, even at brand-new conferences, is at an all-time high).  Yes, you guessed it: the companies that took advantage of the pandemic to develop web applications are the ones pushing in this direction.

While it makes sense to keep some conferences virtual, especially certain international ones, human beings are, by and large, social creatures.  We thrive on personal interaction no matter how many people perpetually stare at their phones.  There is a happy medium, however: virtual meetings, introductions, and even some demonstrations have become more effective, especially when transportation is still an issue or the meeting would only run 30-60 minutes.

This past week I was at a face-to-face conference and experienced the same things I've seen when traveling to job sites.  Hotels still aren't back to cleaning rooms each day, restaurants keep strange hours and can't seat all of their available tables, and, in the case of this week, some restaurants couldn't even open during their published hours (if at all).  This is on top of the shortage of people in other areas, usually entry-level positions.  In discussions with business leaders and managers, the general response is that they can hire people for a day or two, but the new hires either don't show up or state that working isn't good for their 'mental health.'  In some complaints, the new workers are having trouble getting through a full eight-hour day.

But, according to the media, work-from-home software developers, and our adversaries, people are somehow more effective and will work more than that same eight-hour day if they are at home.  I suppose it's possible; after all, little has made much sense since the first few months of 2020.  The reports of people working from home, taking on two or three full-time jobs while working fewer than eight hours per day, have quietly disappeared as well.  Before writing this piece I tested the algorithms on one of the social media platforms to see what would happen: even though the posting, which related to this subject, was controversial (which usually results in high views), it was one of the least viewed I've made in a long time, at least until I drew attention to it.

We have learned from social media leadership, very directly, that their algorithms can be adjusted to determine what gets viewed.  The challenges with these platforms are becoming more newsworthy, albeit slowly.  As one retail advertising social media giant put it to me back in 2016, when a troll post appeared on a 2XL Powerlifting page that the company had set up, they would take the false post down if we signed a two-year advertising contract.  If, however, we attempted to sue them for libel, they would destroy our business (it's literally written on their FAQ page).  We had never mentioned legal action; apparently they have had to deal with the outcry often enough that they bring it up in their sales presentation.  I had wondered why so many restaurants put the platform's name on their doors and websites when it uses these strong-arm tactics; that, too, turned out to be one of the conditions of advertising.

As I mentioned in a past newsletter, people are just as programmable as machine learning systems.  When someone is indoctrinated or conditioned by a constant stream of information carrying a specific message, that message becomes their reality.  We have been warned about this for some time, from the book '1984' to a rather interesting movie called 'Brazil,' and in other philosophical literature.  As anyone in statistics knows, information can be presented in such a way, and with just enough accuracy, to spin the results in any direction you choose.  It has become enough of a concern that standards bodies are addressing it, for example through the IEEE 7000 series ("IEEE Standard Model Process for Addressing Ethical Concerns During System Design").  IEEE has also approached the problem in machine learning and AI through 'IEEE CertifAIEd,' a certification program that addresses the ethics of Autonomous Intelligent Systems.
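
To make the statistics point concrete, here is a minimal Python sketch using made-up salary figures; it is only a toy illustration of how two perfectly accurate summaries of the same data can sell two different stories.

```python
# Toy illustration: the same (made-up) dataset can be "spun" two ways
# simply by choosing which accurate summary statistic to report.
salaries = [32_000, 34_000, 35_000, 36_000, 38_000, 40_000, 250_000]

mean = sum(salaries) / len(salaries)
median = sorted(salaries)[len(salaries) // 2]

print(f"Headline A: 'Average pay is ${mean:,.0f}'")    # about $66,429; technically true
print(f"Headline B: 'Typical pay is ${median:,.0f}'")  # $36,000; also true
# Both numbers are accurate; a single outlier lets either narrative be sold.
```

Neither headline is false, which is exactly the problem: the spin comes from the choice of presentation, not from any error in the numbers.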

Basically, the issue has been recognized as enough of a concern that the release of ethics standards by standards bodies, and others, has been on the rise since 2019.  This also includes standardization of how personal information is utilized.  For instance, much of the rest of the globe follows regulations similar to Europe's GDPR (General Data Protection Regulation), under which you can opt out of any organization using personally identifying information.  At this time no comparably strict regulation or set of rules governs in the USA.  Even in Europe, however, big tech avoids much of the brunt of the regulation: they can still produce the tools, because those tools are used in the USA, yet a company that uses them in the affected countries, even incidentally, can be held accountable.  **"The German Court ruled that the use of Google Fonts without prior consent is a violation of Europe's GDPR because Google Fonts exposes the user's IP address."** (Notices to webmasters came out this week.)
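
As a rough illustration of the kind of check those webmaster notices prompt, the following Python sketch (standard library only; the file name is a placeholder, and the host list is my assumption about which Google domains serve fonts) scans a local HTML file for external Google Fonts references, since requesting fonts from Google's servers is what transmits the visitor's IP address.

```python
import re
import sys

# Minimal sketch: flag external Google Fonts references in a local HTML file.
# Requesting fonts from these hosts sends the visitor's IP address to Google,
# which is the data transfer the German court ruling treated as a GDPR issue.
GOOGLE_FONT_HOSTS = ("fonts.googleapis.com", "fonts.gstatic.com")

def find_google_font_refs(html: str) -> list[str]:
    """Return every URL in the markup that points at a Google Fonts host."""
    urls = re.findall(r'https?://[^\s"\'<>]+', html)
    return [u for u in urls if any(host in u for host in GOOGLE_FONT_HOSTS)]

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "index.html"  # placeholder file name
    with open(path, encoding="utf-8") as f:
        refs = find_google_font_refs(f.read())
    if refs:
        print("External Google Fonts references found (consider self-hosting):")
        for url in refs:
            print("  ", url)
    else:
        print("No external Google Fonts references found.")
```

The usual remediation suggested to site owners is to self-host the font files so the visitor's browser never contacts Google at all.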

One of the objectives within these tech industries is tailoring messaging to the individual.  When everything you see echoes your personal beliefs, it creates the appearance that a majority of people believe the same things you do.  Those other people must be crackpots (and they are… right?).  The second part of this approach is that it keeps you coming back for more and more.  In the end, it builds a narrative that started small and then became part of your belief system.  Consider how easy it was to hand human interaction over to online 'bots,' then to replace humans at restaurants (OK, the fast-food ones so far… oh wait… some actual sit-downs too), and now at grocery stores.  'Smart phones' made the dehumanization easiest to accomplish: no matter what you need, there is an app for that.  And why are those apps free, anyway?  Yep, they are gathering information on your habits.  From one standpoint, they can then bombard you with targeted marketing: you know, when you mention within earshot of your phone or Alexa that you like Black Rifle Coffee or Death Wish Coffee, and suddenly an email arrives, or a coupon for one of them shows up on your social media account (or, apparently, when you are typing a post).
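
Here is a stripped-down Python sketch of the feedback loop described above; the topics, post titles, and scoring rule are all invented for illustration, not drawn from any real platform.  Each click raises the weight of that topic, so the next ranking pushes similar items up and everything else down.

```python
from collections import Counter

# Toy model of a personalized feed: every click reinforces the user's profile,
# and the profile in turn reorders the feed toward what the user already likes.
posts = [
    ("coffee", "New roast review"),
    ("coffee", "Best grinders of the year"),
    ("politics", "Local election recap"),
    ("science", "Telescope images released"),
    ("coffee", "Cold brew tips"),
]

interests = Counter()  # what the platform has "learned" about this user

def rank_feed(posts, interests):
    # Higher engagement with a topic floats that topic to the top of the feed.
    return sorted(posts, key=lambda post: interests[post[0]], reverse=True)

def click(topic):
    interests[topic] += 1  # each click strengthens the echo

# The user clicks two coffee posts; the feed quietly reorders around them.
click("coffee")
click("coffee")
for topic, title in rank_feed(posts, interests):
    print(f"[{topic}] {title}")
```

Even this toy version shows the narrowing effect: after only two clicks, everything the user engaged with sits at the top, and the other viewpoints sink without the user ever choosing to hide them.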

As we explore the challenges we see with the workforce and its attitudes, we can note the changes in the messaging coming through social and general media.  Additionally, bots and other adversarial systems use machine learning and AI to probe for openings and insert harmful information that perpetuates conflict and damages industry.