• Quote of the week

    “Our government satellite surveillance systems are a new way for criminals to gain possession not only of our financial lives, but our most precious resource: Our minds. What can we do and who are these individuals who are trying to control the way we think, feel, act and what we do?”
    -- A New Breed - Satellite Terrorism in America

Can DARPA CREATE an AI for unmanned-unmanned teaming?


A new opportunity would fund development of an AI framework to coordinate actions between a mix of machines on the battlefield.

Humans learn by doing. The shared experience of hardship, endured and overcome, is what bonds disparate recruits into functional teams that, over time, learn each other’s strengths and weaknesses and, ideally at least, adapt to make the best use of each other. As more robots move onto the battlefield, DARPA wants those machines to work together, learn from each other, and steer away from actions that cause regret. To spark research in this area, the Pentagon’s blue-sky projects wing launched “CREATE,” or “Context Reasoning for Autonomous Teaming.”

The Artificial Intelligence Exploration Opportunity, announced Sept. 3, seeks research into how a group of small, disparate uncrewed vehicles could work together autonomously. Phase 1 calls for feasibility studies; Phase 2 refines the AI teaming techniques and algorithms from Phase 1 to run on existing vehicle hardware, either in simulation or on the hardware itself.

In much the same way that a group of people makes decisions together on the fly, the solicitation notes that “local decision making is less informed and suboptimal but is infinitely scalable, naturally applicable to heterogeneous teams, and fast.”

For robots that have to work together in battle, those last traits are especially important, as they allow independent autonomous action, “thus breaking the reliance on centralized C2 and the need for pre-planned cost function definition.”

This is a step beyond the remotely directed and controlled systems of today, which rely on extensive communications networks to give humans fine-tuned control over how machines move. Should those networks break down, the goal is machines that can move toward objectives on their own, even if those moves are less efficient or effective than the choices a human operator would have made. Advances in electronic warfare, combined with fears about the loss of communication networks, both terrestrial and in orbit, are part of what’s driving military research and investment in autonomous machines.

What sets CREATE apart from, say, swarming systems of quadcopters is that DARPA wants a framework that can coordinate a heterogeneous group of machines: likely quadcopters and unmanned ground vehicles, plus different kinds of flying and swimming robots. In other words, a whole mechanical menagerie working toward a common purpose. With the right AI tool, the machine-machine team should be able to discern the context of where it is and what is happening, then act independently. In addition, it can meet multiple spontaneous goals that arise over the course of a mission.
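To make that idea concrete, here is a minimal sketch (a hypothetical illustration, not anything from the DARPA solicitation) of how heterogeneous vehicles could share a common interface while each makes its own decisions from local observations, with no central controller:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Hypothetical common interface for heterogeneous unmanned vehicles."""

    @abstractmethod
    def observe(self) -> dict:
        """Return local sensor observations (position, nearby threats, etc.)."""

    @abstractmethod
    def decide(self, observation: dict, goals: list) -> str:
        """Pick an action from local information -- no centralized C2 required."""

class Quadcopter(Agent):
    def observe(self) -> dict:
        return {"altitude": 120, "threat_nearby": False}

    def decide(self, observation: dict, goals: list) -> str:
        if observation["threat_nearby"]:
            return "evade"
        return f"scout toward {goals[0]}"

class GroundVehicle(Agent):
    def observe(self) -> dict:
        return {"terrain": "open", "threat_nearby": True}

    def decide(self, observation: dict, goals: list) -> str:
        if observation["threat_nearby"]:
            return "take cover"
        return f"advance toward {goals[0]}"

# Each agent acts on its own observations; teamwork emerges from
# shared goals rather than from a central planner.
team = [Quadcopter(), GroundVehicle()]
goals = ["waypoint A", "waypoint B"]
actions = [agent.decide(agent.observe(), goals) for agent in team]
print(actions)  # ['scout toward waypoint A', 'take cover']
```

The class names, observations, and actions here are all invented for illustration; the point is only that a shared interface lets very different machines plug into one decision-making framework.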


Getting to that point means a system that can learn and, especially, a system that can learn from mistakes.

“Agents within the team will have mechanisms for regulation to ensure (favorable) emergent behavior of the team to (1) better ensure the desired mission outcome and (2) bound the cost of unintended adverse action or ‘regret,’” reads the solicitation.
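One simple way to bound “regret” in this sense (a hypothetical sketch under invented assumptions, not the solicitation’s actual mechanism) is for each agent to track how much worse its chosen actions were than the best available alternative, and abandon any action whose accumulated regret exceeds a budget:

```python
class RegretBoundedAgent:
    """Hypothetical agent that drops actions whose accumulated regret
    exceeds a fixed budget, bounding the cost of adverse behavior."""

    def __init__(self, actions, regret_budget=5.0):
        self.actions = list(actions)
        self.regret_budget = regret_budget
        self.regret = {a: 0.0 for a in actions}

    def update(self, chosen, costs):
        """Accumulate regret: cost of the chosen action minus the best cost."""
        best = min(costs.values())
        self.regret[chosen] += costs[chosen] - best
        # Bound unintended adverse action: stop repeating costly choices.
        if (self.regret[chosen] > self.regret_budget
                and chosen in self.actions
                and len(self.actions) > 1):
            self.actions.remove(chosen)

agent = RegretBoundedAgent(["flank", "charge"], regret_budget=3.0)
for _ in range(4):
    # "charge" repeatedly costs 2.0 more than the best alternative.
    agent.update("charge", {"flank": 1.0, "charge": 3.0})
print(agent.actions)  # ['flank']
```

After two updates the regret on “charge” exceeds the budget and the agent stops considering it, which is the flavor of self-regulation the solicitation describes, though the real mechanisms would be far more sophisticated.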


Bread & Circus: Perfect Product Placement

This year’s Super Bowl commercials took advantage of AI’s technological advancements to embed them in the public mind, like this one below from the agency FCB Chicago. How can you compete against robots that out-run, out-bike, and out-perform humans in just about every way?


Experts warn of actual AI risks – we’re about to live in a sci-fi movie

Long before artificial intelligence (AI) was even a real thing, science fiction novels and films were warning us about the potentially catastrophic dangers of giving machines too much power.

Now that AI actually exists, and is in fact fairly widespread, it may be time to consider some of the potential drawbacks and dangers of the technology before we find ourselves in a nightmarish dystopia the likes of which we’ve only begun to imagine.

Experts from the industry as well as academia have done exactly that, in a recently released 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, Mitigation.”

The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke the potential negative uses of artificial intelligence into three categories: physical, digital, and political.

The digital category covers the ways that hackers and other criminals can use these advancements to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites for stealing information, or to scan software for vulnerabilities far faster than a human can. AI systems can even be developed specifically to fool other AI systems.

Physical uses include AI-enhanced weapons that automate military and/or terrorist attacks. Commercial drones can be fitted with artificial intelligence programs, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI weapons can be controlled from afar, and, most alarmingly, of “robot swarms” – which are, horrifyingly, exactly what they sound like.


Lastly, the report warned that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.

AI systems are getting creepily good at generating faked images and videos – a skill that would make it all too easy to create propaganda from scratch. Furthermore, AI can be used to find the most important and vulnerable targets for such propaganda – a potential practice the report calls “personalized persuasion.” The technology can also be used to squash dissenting opinions by scanning the internet and removing them.

The overall message of the report is that developments in this technology are “dual use” — meaning that AI can be created that is either helpful to humans, or harmful, depending on the intentions of the people programming it.

That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.

The report concludes that all of these malicious uses of the technology could easily be achieved within the next five years. Buckle up, because they are here.

2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)

In 2016, the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). The GGE held its first meeting from 13 to 17 November 2017 in Geneva.

At their 2017 Meeting, the High Contracting Parties to the CCW agreed that the GGE on LAWS shall meet again in 2018 for a duration of ten days in Geneva. The first meeting of the GGE on LAWS in 2018 took place from 9 to 13 April. The second meeting will be held from 27 to 31 August 2018. The meeting will take place in Conference Room XVIII on 27 August and in Room XX from 28 to 31 August 2018. Ambassador Amandeep Singh Gill of India is the chair of both meetings of the GGE on LAWS.

The final report of the 2017 meeting of the GGE on LAWS, particularly the “Conclusions and Recommendations” section, provides guidance and direction for the work of the GGE to be undertaken in 2018.

The overarching issues in the area of LAWS that will be addressed in the 2018 meetings of the GGE include:

  1. Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW;
  2. Further consideration of the human element in the use of lethal force; aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems;
  3. Review of potential military applications of related technologies in the context of the Group’s work;
  4. Possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention without prejudging policy outcomes and taking into account past, present and future proposals.

Documents

Report of the 2018 session of the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems

Emerging Commonalities, Conclusions and Recommendations (including Possible Guiding Principles) – Unformatted advance version

Chair’s summary of the discussion on agenda items 6 a, b, c and d, 9-13 April 2018

List of Participants + List of Participants Addendum (April)
Final List of Participants

Provisional Agenda

Provisional Programme of Work

Letter from the Chairperson of the Group of Governmental Experts related to emerging technologies in the area of lethal autonomous weapons systems (LAWS), dated 10 January 2018

The Chair of the CCW GGE on LAWS would like to invite ALL non-governmental actors to contribute reflections, ideas, insights and experiences to enrich the April and August deliberations of governmental experts. Please refer to the newly released programme and agenda to frame your contribution(s).

You may also submit contributions to ccw@un.org

Sources:

Can DARPA CREATE an AI for unmanned-unmanned teaming? 

Experts warn of actual AI risks – we’re about to live in a sci-fi movie



