
Thread: AI and Loss of Liberties

  1. #11 RadioGod, Senior Member (Join Date: Apr 2018; Posts: 469)
    Found another decent link:

    I live with a profound happiness that can only be achieved by being hated by Mr.Veritis

  2. #12 Chris, Senior Member (Join Date: Feb 2012; Posts: 136,699)
    It was Turing, iirc, who said a computer can't be both intelligent and infallible. In other words, to be intelligent it must make mistakes, commit errors. That's how AI learns: by making mistakes and learning from them. That's its strength, and its weakness.

    Edmund Burke: "In vain you tell me that Artificial Government is good, but that I fall out only with the Abuse. The Thing! the Thing itself is the Abuse!"

  3. The Following User Says Thank You to Chris For This Useful Post:

    RadioGod (06-10-2018)

  4. #13 RadioGod, Senior Member (Join Date: Apr 2018; Posts: 469)
    Quote Originally Posted by Chris View Post
    It was Turing, iirc, who said a computer can't be both intelligent and infallible. In other words, to be intelligent it must make mistakes, commit errors. That's how AI learns: by making mistakes and learning from them. That's its strength, and its weakness.

    Thank you for the link. Without a doubt, when AI makes a mistake, it is usually a big one. And factoring in Moore's Law, things can get hairy. These systems are of course prone to failure, just like the speech recognition in someone's phone. But there is a trend in America, and around the developed world, of taking all the magic for granted, which leads us to assume that what our devices tell us is always true.
    I'll add a quick link to one video (top) specifically about errors with AI, and I'll also add a good one (bottom) by the "father" of modern AI, so everyone can see his vision of the future.





  5. #14 Chris, Senior Member (Join Date: Feb 2012; Posts: 136,699)
    Right, but the point is that an AI system needs to make mistakes to learn. The video shows a genetic algorithm used to build a vehicle that handles terrain as it encounters it. Mutations are tried out, and those that fail die out. Eventually, as the search space is explored, a solution is found. That strength is also its weakness: throw something completely new at it and it will flounder.
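    The mutate-cull-repeat mechanism described above can be sketched in a few lines of Python. This is a toy stand-in, not the simulation from the video: the "vehicle" here is just a list of four numbers and the fitness function is an arbitrary target, but the mechanism (try mutations, let failures die out, keep exploring) is the same.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical stand-in: a "design" is a list of four numbers, and
# terrain-handling ability is scored by a toy distance-to-target
# function. A real simulation would evaluate physics, not distances.
TARGET = [0.5, 0.5, 0.5, 0.5]

def fitness(design):
    # Higher is better; designs closer to the arbitrary target score higher.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, sigma=0.1):
    # Random nudges: most mutations hurt the design and will die out.
    return [d + random.gauss(0, sigma) for d in design]

def evolve(pop_size=20, generations=50):
    population = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness; the failing half of the population dies out.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors reproduce with mutation, exploring the search space.
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

    Note the weakness being described: swap in a different fitness function (new terrain) and the evolved population scores poorly, because it is tuned entirely to the conditions it was tested against.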

  6. #15 RadioGod, Senior Member (Join Date: Apr 2018; Posts: 469)
    Quote Originally Posted by Chris View Post
    Right, but the point is that an AI system needs to make mistakes to learn. The video shows a genetic algorithm used to build a vehicle that handles terrain as it encounters it. Mutations are tried out, and those that fail die out. Eventually, as the search space is explored, a solution is found. That strength is also its weakness: throw something completely new at it and it will flounder.
    True. That is where AGI, or artificial general intelligence, comes into play. The AI is trained across many problem-solving areas, and coupled with enormous data and tremendously advanced code that lets the AI rewrite its own code, it is far faster and more advanced than what the video showed.
    The AI shown in the video you posted could run on a $4,000 PC, and even then the process would be sped up hundreds of times. As tensor cores find their way into laptop graphics cards in a couple of years, a simulation like the one in the video will run on a $200 laptop, thousands of times faster. That means every kid nerd will be nurturing their own AI in their room, while all the big AIs nurture the rest of us along in society.

  7. #16 RadioGod, Senior Member (Join Date: Apr 2018; Posts: 469)
    Here is a leaked Google internal-only video about where AI could take us by nurturing us along:


  8. #17 donttread, Senior Member (Join Date: Nov 2013; Posts: 30,560)
    Quote Originally Posted by RadioGod View Post
    Artificial intelligence is basically just a computer program that incorporates set instructions, a goal, and access to information. These basic blocks allow the AI to spot patterns and connections, then apply its instruction sets to achieve its goals. Most successful AIs make use of integrated techniques called "deep learning", where a feedback-type loop is programmed in, enabling an AI to "learn", almost on its own.

    Because of the way an AI is structured, there are a lot of ways to enhance its scope and capabilities. Starting with its basic program, the parameters can be extended, and deep learning can be used to let an AI make its own code more efficient. Even researchers these days sometimes cannot really tell what an AI has done to itself to make itself work better.
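    The "feedback-type loop" described above can be illustrated with one of the oldest error-driven learners, a perceptron. This is a deliberately tiny sketch: the program adjusts its own parameters whenever its prediction is wrong, which is the feedback loop in miniature; real deep learning stacks many such units and tunes millions of parameters.

```python
# A perceptron learning the logical AND function purely from its mistakes.

def train_and_gate(epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(epochs):
        for inputs, target in data:
            # Predict, then learn only from the error (the feedback signal).
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

weights, bias = train_and_gate()

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
```

    After training, `predict` gets every case of AND right, even though nobody ever wrote an AND rule into the program; the rule emerged from the error feedback.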

    Then there is the goal, or goals, of the AI. These can be simple, like learning how to play checkers, or almost infinitely complex, like learning how to trade on the stock market. The goal in both cases, of course, is to win, and winning is assigned certain parameters. In the case of checkers, it is to eliminate the opponent's pieces; in the case of the stock market, winning is accumulating money. Goals are usually set not by the AI but by the programmers.
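    In code, a goal usually amounts to a scoring (reward) function that the programmers write and the AI tries to maximize. These two functions are hypothetical illustrations of the idea, not any real system's API:

```python
# Goals encoded as reward functions chosen by the programmers, not the AI.

def checkers_reward(my_pieces, opponent_pieces):
    # "Winning" = eliminating the opponent's pieces.
    return my_pieces - opponent_pieces

def trading_reward(portfolio_value, starting_value):
    # "Winning" = accumulating money.
    return portfolio_value - starting_value
```

    Everything the AI learns is shaped by whichever function it is told to maximize, which is why who sets the goal matters as much as the algorithm itself.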
    Data is the most critical element of any AI. The more data it has access to, the more accurate it becomes. In the case of the stock market AI, spotting the trends needed to make profitable trades requires tremendous amounts of data: weather, crop data, business data, mining data, politics, consumer data, global calendar data (like holidays), banking data, and so on. Every facet of life on our planet affects stock market trades. For something so infinitely complex, continuous access to all possible historical and real-time data is critical; an AI can't be competitive without it.

    Most AIs today are not geared toward solving the world's great problems. A small group of people cannot afford the programmers, equipment, and information access needed to get one up and running. Large functioning AIs fall squarely under the wings of universities with research grants, government agencies, and large corporations, and these AIs are goal-oriented toward predicting, controlling, and profiting in their respective markets.

    Given that information access is the most important facet of AI accuracy, and thereby of profitability and market control, is it any surprise that AI systems are developing at a pace that coincides with the erosion of internet privacy? Recently we witnessed the net neutrality repeal. Internet Service Providers (ISPs) are no longer under the FCC regulations that mandated all internet traffic be treated equally; they are now under FTC rules. There are only two things the FTC can enforce:
    1.) Anti-trust violations. No ISP is allowed to monopolize a market. Arguably, many ISPs already do in certain areas where they are the only broadband carrier and their only competition is dial-up or satellite.
    2.) Violations of their posted user agreements. If an ISP posts a user agreement for use of its services, it can be fined for not abiding by that agreement.

    These changes allow ISPs the same latitude as any tech company, such as Microsoft or Google, to collect data and package it for sale. The big difference is that tech companies only collect data when you use their services or apps, while the ISP you use for internet access collects everything you do across all devices, services, and apps. If you connect it to your internet, your ISP can store it and sell it. And is it any surprise that the European Union has rolled out the GDPR at the same time as the net neutrality repeal here in America? Under the GDPR, the rules are almost identical to the FTC's: companies just give a user-agreement notice. What do you do when your Steam app or banking app asks you to agree to its user agreement? You click "agree", unless you want to lose access to your money or to things you have already paid for. Say no to your ISP, and you will be without internet. So many things require internet today that life is becoming almost impossible without it.

    Who are the customers of these big AI systems? Most people know that advertising is driven by this data; all the big tech companies sell targeted ads. But that is small time compared with the big customers: AI developers that work for large businesses and governments. And most government access to that data is actually funneled through businesses. By placing their companies in a middleman position with government agencies, they have significantly grown profits from the pockets of the taxpayer and made themselves essential parts of the government. In truth, using contractor status, they have made their companies a de facto part of most government agencies. As government agencies demand new goals and objectives, corporate influence and control over our government will deepen until they are one and the same. One could argue it already is that way.

    New technologies are also in the works that allow for real-time collection of data we have never seen before. Brain-computer interface (BCI) technologies let a person's brainwaves be read: they can gauge emotions, thoughts, and mental pictures; they can tell what you are looking at; they can tell if you are paying attention or daydreaming. Worse yet, the same technologies that capture your thoughts and emotions in real time can also easily work in reverse, inputting thoughts and emotions into a person in real time. As this technology evolves over the next few years, we won't be able to tell our own thoughts and feelings from AI-induced ones. No longer will a government need to create controversies that manipulate the will of the public to fulfill an objective. No longer will companies spend money on advertising campaigns to convince you they are acting in society's best interests. And whoever wins a vote for public office will come down to who hired the best AI firm with the most data access.

    As the demand for more information, analytics, and control goes up, our privacy goes down. As AI becomes more invasive in our lives, it will steer us in the direction someone wants us to go, sometimes subtly, sometimes overtly. And the people in control of these marvels of engineering are not after a better life for your community or country. They are after profits and control.

    https://www.inverse.com/article/3873...ne-fcc-meeting
    https://www.dailydot.com/layer8/what...eutrality-fcc/
    https://store.neurosky.com/pages/mindwave
    https://safehaven.com/article/45383/...ing-Technology
    https://wolfstreet.com/2018/05/22/ch...-bad-behavior/
    https://en.wikipedia.org/wiki/Social_Credit_System

    Yup, we need to take the process back.
