
Thread: AI and Loss of Liberties

  1. #11
    RadioGod, Senior Member (joined Apr 2018; 974 posts)
    Found another decent link:

    I live with a profound happiness that can only be achieved by being hated by Mr.Veritis

  2. #12
    Chris, Senior Member (joined Feb 2012; 141,691 posts)
    It was Turing, iirc, who said a computer can't be both intelligent and infallible. IOW, to be intelligent it must make mistakes, commit errors. That's how AI learns: by making mistakes and learning from them. That's its strength. And its weakness.

    Edmund Burke: "In vain you tell me that Artificial Government is good, but that I fall out only with the Abuse. The Thing! the Thing itself is the Abuse!"

  3. The Following User Says Thank You to Chris For This Useful Post:

    RadioGod (06-10-2018)

  4. #13
    RadioGod, Senior Member (joined Apr 2018; 974 posts)
    Quote Originally Posted by Chris View Post
    It was Turing, iirc, who said a computer can't be both intelligent and infallible. IOW, to be intelligent it must make mistakes, commit errors. That's how AI learns: by making mistakes and learning from them. That's its strength. And its weakness.

    Thank you for the link. Without a doubt, when AI makes a mistake, it is usually a big one. And factoring in Moore's Law, things can get hairy. These systems are of course prone to failure, just like the speech recognition in someone's phone. But there is a trend in America, and around the developed world, of taking all this magic for granted, and that leads us to assume that what our devices tell us is always true.
    I'll add a quick link to one video (top) specifically about errors in AI, and a good one (bottom) by the "father" of modern AI, so everyone can see his vision of the future.




    I live with a profound happiness that can only be achieved by being hated by Mr.Veritis

  5. #14
    Chris, Senior Member (joined Feb 2012; 141,691 posts)
    Right, but the point is an AI system needs to make mistakes to learn. The video shows the use of a genetic algorithm to build a vehicle to handle terrain as it encounters it. Mutations are tried out and, if they fail, die out. Eventually, as the search space is explored, a solution is found. That strength is also its weakness: throw something completely new at it and it will flounder.
    Edmund Burke: "In vain you tell me that Artificial Government is good, but that I fall out only with the Abuse. The Thing! the Thing itself is the Abuse!"
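The mutate-test-select loop the post describes can be sketched as a toy genetic algorithm. Everything here is invented for illustration: a real vehicle simulator would score designs by how far they travel, while this sketch just scores closeness to a hidden target design.

```python
import random

random.seed(0)

# Hypothetical "ideal vehicle" the search doesn't know about;
# a real simulation would instead measure distance traveled over terrain.
TARGET = [0.5, 0.8, 0.3]

def fitness(genes):
    # Higher is better; penalize squared distance from the optimum.
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def mutate(genes, rate=0.1):
    # Random mutation: most changes hurt and die out, a few improve fitness.
    return [g + random.gauss(0, rate) for g in genes]

def evolve(pop_size=20, generations=100):
    population = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]      # failures die out
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
```

As the post says, the search only works within what the fitness function measures; a situation the scoring never covered leaves the algorithm floundering.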

  6. #15
    RadioGod, Senior Member (joined Apr 2018; 974 posts)
    Quote Originally Posted by Chris View Post
    Right, but the point is an AI system needs to make mistakes to learn. The video shows the use of a genetic algorithm to build a vehicle to handle terrain as it encounters it. Mutations are tried out and, if they fail, die out. Eventually, as the search space is explored, a solution is found. That strength is also its weakness: throw something completely new at it and it will flounder.
    True. That is where AGI, or general AI, comes into play. They train the AI in many problem-solving areas, and coupled with enormous data and tremendously advanced coding that allows the AI to code itself, it is far faster and more advanced than the video showed.
    The AI shown in the video you posted could be run on a $4,000 PC, and even then the process would be sped up hundreds of times. As tensor cores find their way into laptop graphics cards in a couple of years, a simulation like the one in the video could be run on a $200 laptop, thousands of times faster. This means every kid nerd will be nurturing their own AI in their room, and all the big AIs will be nurturing all of us along in society.
    I live with a profound happiness that can only be achieved by being hated by Mr.Veritis

  7. #16
    RadioGod, Senior Member (joined Apr 2018; 974 posts)
    Here is a leaked Google internal-only video about where AI could take us by nurturing us along:

    I live with a profound happiness that can only be achieved by being hated by Mr.Veritis

  8. #17
    donttread, Senior Member (joined Nov 2013; 32,988 posts)
    Quote Originally Posted by RadioGod View Post
    Artificial intelligence is basically just a computer program that incorporates set instructions, a goal, and access to information. These basic blocks allow the AI to spot patterns and connections, then apply its instruction sets to achieve its goals. Most successful AIs make use of integrated programs called "deep learning", where a feedback-type loop is programmed in, enabling an AI to "learn" almost on its own.

    Because of the way an AI is structured, there are a lot of ways to enhance its scope and capabilities. Starting with its basic program, the parameters can be extended, and deep learning can be used to let an AI make its own code more efficient. Even researchers these days sometimes cannot tell what an AI has done to itself to make it work better.

    Then there is the goal, or goals, of the AI. These can be simple, like learning how to play checkers, or almost infinitely complex, like learning how to trade on the stock market. The goal in both cases, of course, is to win. Winning is assigned certain parameters: in checkers, it is eliminating the opponent's pieces; in the stock market, it is accumulating money. Goals are not usually set by the AI, but by the programmers.
    Data is the most critical element of any AI. The more data it has access to, the more accurate it becomes. For the stock market AI, spotting the trends needed to make profitable trades requires tremendous amounts of data: weather, crop data, business data, mining data, politics, consumer data, global calendar data (like holidays), banking data, etc. Every facet of life on our planet affects stock market trades. For something so infinitely complex, continuous access to all possible historical and real-time data is critical to an AI. It can't be competitive without it.

    Most AI's today are not geared towards solving the world's great problems. A small group of people cannot afford the programmers, equipment, and information access needed to get one up and running. Large functioning AI's fall squarely under the wings of universities with research grants, government agencies, and large corporations. And these AI's are goal oriented towards predicting, controlling, and profiting in their respective markets.

    Given that information access is the most important facet of AI accuracy, and thereby of profitability and market control, is it any surprise that AI systems are developing at a pace that coincides with the erosion of internet privacy? Recently we witnessed the net neutrality repeal. Internet Service Providers (ISPs) are no longer under the FCC regulations which mandated that all internet traffic be treated equally; they are now under FTC rules. There are only two things the FTC can enforce:
    1.) Anti-trust violations. No ISP is allowed to monopolize a market. Arguably, many ISPs already do in areas where they are the only broadband carrier and their only competition is dial-up or satellite.
    2.) Violations of their posted user agreements. If an ISP posts a user agreement for use of its services, it can be fined for not abiding by that agreement.

    These changes give ISPs the same latitude as any tech company, such as Microsoft or Google, to collect data and package it for sale. The big difference is that tech companies only collect data when you use their services or apps, while the ISP you use for internet access collects everything you do across all devices, services, and apps. If you connect it to the internet, your ISP can store it and sell it. And is it any surprise that the EU has rolled out the GDPR at the same time as the net neutrality repeal here in America? Under the GDPR, the rules are almost identical to the FTC's: companies just give a user agreement notice. What do you do when your Steam app or banking app asks you to agree to their user agreement? You click "agree", unless you want to lose access to your money or things you already spent money on. Say no to your ISP, and you will be without internet. So many things require internet today that life is becoming almost impossible without it.

    Who are the customers of these big AI systems? Most people know that advertising is driven by this data; all the big tech companies sell targeted ads. But this is small time compared with the big customers: AI developers that work for large businesses and governments. Most government access to that data is actually funneled through businesses. By placing their companies in a middleman position with government agencies, they have significantly grown profits from the pockets of the taxpayer and made themselves essential parts of the government. In truth, using contractor status, they have made their companies a de facto part of most government agencies. As government agencies demand new goals and objectives, corporate influence and control over our government will deepen until they are one and the same. One could argue it already is that way.

    New technologies are also in the works to allow for real-time collection of data we have never seen before. Brain-computer interface (BCI) technologies allow a person's brainwaves to be read. They can gauge emotions, thoughts, and mental pictures. They can tell what you are looking at, and whether you are paying attention or daydreaming. Worse yet, the same technologies that capture thoughts and emotions in real time can also work in reverse: the real-time inputting of thoughts and emotions into a person. As this technology evolves over the next few years, we won't know our own thoughts and feelings from AI-induced ones. No longer will a government need to create controversies to manipulate the will of the public toward an objective. No longer will companies spend money on advertising campaigns to convince you they are acting in society's best interests. And whoever wins a vote for public office will come down to who has hired the best AI firm with the most data access.

    As the demand for more information, analytics, and control goes up, our privacy goes down. As AI becomes more invasive in our lives, it will steer us in the direction someone wants us to go, sometimes subtly, sometimes overtly. And the people in control of these marvels of engineering are not after a better life for your community or country. They are after profits and control.

    https://www.inverse.com/article/3873...ne-fcc-meeting
    https://www.dailydot.com/layer8/what...eutrality-fcc/
    https://store.neurosky.com/pages/mindwave
    https://safehaven.com/article/45383/...ing-Technology
    https://wolfstreet.com/2018/05/22/ch...-bad-behavior/
    https://en.wikipedia.org/wiki/Social_Credit_System

    Yup, we need to take the process back
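The "feedback-type loop" the quoted post describes can be sketched in a few lines. The task, the target pattern y = 3x, and the learning rate are all invented for illustration: the program makes a prediction, measures its error, and feeds that error back to adjust itself, while the goal itself is fixed by the programmer, not the AI.

```python
# Minimal "learn from feedback" loop (illustrative only).
# Goal, chosen by the programmer: predict y from x, where the data follows y = 3 * x.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

w = 0.0                 # the program's current internal "understanding"
learning_rate = 0.05

for _ in range(200):                     # repeated feedback cycles
    for x, y in data:
        error = (w * x) - y              # how wrong was the prediction?
        w -= learning_rate * error * x   # feed the error back into the model

print(round(w, 2))  # has converged close to 3.0
```

Its strength and weakness are the ones discussed earlier in the thread: it can only correct errors it actually gets to measure.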

  9. #18
    Just AnotherPerson, Senior Member (joined Aug 2016; 1,720 posts)




    'Whoever leads in AI will rule the world’: Putin to Russian children on Knowledge Day
    https://www.rt.com/news/401731-ai-rule-world-putin/

    Vladimir Putin spoke with students about science in an open lesson on September 1, the start of the school year in Russia. He told them that “the future belongs to artificial intelligence,” and whoever masters it first will rule the world.
    “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” Russian President Vladimir Putin said.
    However, the president said he would not like to see anyone “monopolize” the field.
    “If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today,” he told students from across Russia via satellite link-up, speaking from the Yaroslavl region.
    We are all brothers and sisters in humanity. We are all made from the same dust of stars. We cannot be separated because all life is interconnected.

  10. #19
    zachroidott, Senior Member (joined Jul 2018; 242 posts)
    Quote Originally Posted by Chris View Post
    Only by learning can AI become that powerful.
    It is learning. It's mastered chess by playing chess masters, and racism by chatting online.

  11. #20
    Just AnotherPerson, Senior Member (joined Aug 2016; 1,720 posts)
    Senators are asking whether artificial intelligence could violate US civil rights laws


    https://qz.com/1398491/senators-are-...l-rights-laws/



    Seven members of the US Congress have sent letters to the Federal Trade Commission (pdf), Federal Bureau of Investigation (pdf), and Equal Employment Opportunity Commission (pdf) asking whether the agencies have vetted the potential biases of artificial intelligence algorithms being used for commerce, surveillance, and hiring.
    “We are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases,” a letter to the FTC says. “As a result, their use may violate civil rights laws and could be unfair and deceptive.”
    We are all brothers and sisters in humanity. We are all made from the same dust of stars. We cannot be separated because all life is interconnected.
