
Thesis: A strategic perspective on the commercialization of artificial intelligence

Submitted by Siddhartha Ray Barua.

Abstract: Many companies are increasing their focus on Artificial Intelligence as they incorporate Machine Learning and Cognitive technologies into their current offerings. Industries ranging from healthcare and pharmaceuticals to finance, automotive, retail, and manufacturing are all trying to deploy and scale enterprise AI systems while reducing their risk. Companies regularly struggle to find appropriate and applicable use cases for Artificial Intelligence and Machine Learning projects. The field of Artificial Intelligence has a rich literature on the modeling of technical systems that implement Machine Learning and Deep Learning methods. This thesis attempts to connect the literature on business and technology, and on the evolution and adoption of technology, to the emergent properties of Artificial Intelligence systems. The aim of this research is to identify high- and low-value market segments and use cases within these industries, prognosticate the evolution of different AI technologies, and begin to outline the implications of commercializing such technologies for various stakeholders. The thesis also provides a framework to better prepare business owners to commercialize Artificial Intelligence technologies in pursuit of their strategic goals.

To read the complete thesis, visit DSpace at the MIT Libraries.



  • Open access
  • Published: 23 November 2017

Exploring the impact of artificial intelligence on teaching and learning in higher education

  • Stefan A. D. Popenici (ORCID: orcid.org/0000-0002-0323-2945)
  • Sharon Kerr

Research and Practice in Technology Enhanced Learning, volume 12, Article number: 22 (2017)


This paper explores the emergence of artificial intelligence in teaching and learning in higher education. It investigates the educational implications of emerging technologies for the way students learn and for how institutions teach and evolve. Recent technological advancements and the increasing speed at which new technologies are adopted in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and for student learning in the adoption of these technologies for teaching, learning, student support, and administration, and explore further directions for research.

Introduction

The future of higher education is intrinsically linked with developments in new technologies and the computing capacities of the new intelligent machines. In this field, advances in artificial intelligence open new possibilities and challenges for teaching and learning in higher education, with the potential to fundamentally change governance and the internal architecture of institutions of higher education. With answers to the question of ‘what is artificial intelligence’ shaped by philosophical positions taken since Aristotle, there is little agreement on an ultimate definition.

In the 1950s, Alan Turing proposed a solution to the question of when a system designed by a human is ‘intelligent.’ Turing proposed the imitation game, a test of a human listener’s capacity to distinguish a conversation with a machine from a conversation with another human; if this distinction cannot be made, we can admit that we have an intelligent system, or artificial intelligence (AI). It is worth remembering that the focus on AI solutions goes back to the 1950s; in 1956 John McCarthy offered one of the first and most influential definitions: “The study [of artificial intelligence] is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” (Russell and Norvig 2010).

Since 1956, we find various theoretical understandings of artificial intelligence, influenced by chemistry, biology, linguistics, mathematics, and the advancement of AI solutions. However, the variety of definitions and understandings remains widely disputed. Most approaches focus on limited perspectives on cognition or simply ignore the political, psychological, and philosophical aspects of the concept of intelligence. For the purpose of our analysis of the impact of artificial intelligence on teaching and learning in higher education, we propose a basic definition informed by a review of some previous definitions in this field. Thus, we define artificial intelligence (AI) as computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction, and the use of data for complex processing tasks.

Artificial intelligence is currently progressing at an accelerated pace, and this already impacts the profound nature of services within higher education. For example, universities already use an incipient form of artificial intelligence, IBM’s supercomputer Watson. This solution provides student advice for Deakin University in Australia at any time of day, 365 days a year (Deakin University 2014). Even if it is based on algorithms suited to repetitive and relatively predictable tasks, Watson’s use is an example of the future impact of AI on the administrative workforce profile in higher education. It is changing the structure and quality of services, the dynamics of time within the university, and the structure of its workforce. A supercomputer able to provide bespoke feedback at any hour reduces the need to employ the same number of administrative staff previously serving this function. In this context, it is also important to note that ‘machine learning’ is a promising field of artificial intelligence. While some AI solutions remain dependent on programming, others have an inbuilt capacity to learn patterns and make predictions. An example is AlphaGo, software developed by DeepMind, Google’s AI subsidiary, which was able to defeat the world’s best player of Go, a very complex board game (Gibney 2017). We define ‘machine learning’ as a subfield of artificial intelligence that includes software able to recognize patterns, make predictions, and apply the newly discovered patterns to situations that were not included or covered by its initial design.
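For readers unfamiliar with the mechanics behind this definition, the idea of software that recognizes patterns and applies them to situations outside its initial design can be illustrated with a minimal sketch. The example below is illustrative only and is not drawn from this paper: it uses invented data and a deliberately simple one-nearest-neighbour rule, one of the most basic pattern-learning techniques.

```python
# Minimal illustration of "machine learning" as defined above:
# the program is never told the rule that separates the two
# labels; it recovers the pattern from labelled examples and
# applies it to a point it has never seen. Data are invented.

def predict(examples, point):
    """Return the label of the training example closest to `point`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(examples, key=lambda ex: sq_dist(ex[0], point))
    return closest[1]

# Labelled training data: (features, label)
training = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
            ((5.0, 5.0), "high"), ((4.8, 5.2), "high")]

# A point not covered by the initial "design" is classified
# by its proximity to the learned examples.
print(predict(training, (4.5, 4.9)))  # prints "high"
```

Modern machine learning systems such as AlphaGo are of course vastly more sophisticated, but the underlying principle, generalizing from examples rather than following hand-written rules, is the same.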

Results and discussion

While AI solutions have the potential to structurally change university administrative services, the realm of teaching and learning in higher education presents a very different set of challenges. Artificial intelligence solutions apply to tasks that can be automated, but cannot yet be envisaged as a solution for the more complex tasks of higher learning. The difficulty supercomputers have in detecting irony, sarcasm, and humor is marked by various attempts reduced to superficial solutions: algorithms that search for factors such as the repetitive use of punctuation marks, capital letters, or key phrases (Tsur et al. 2010). There is new hype about the possibilities of AI in education, but we have reasons to remain aware of the real limits of AI algorithmic solutions in the complex endeavors of learning in higher education.

For example, we can remember that enthusiastic and unquestioned trust in the AI capabilities of a revolutionary new car led in May 2016 to the death of the driver, when the car, set on ‘autopilot,’ went underneath a tractor-trailer that was not detected by the software (Reuters/ABC 2016). There is also the story of Microsoft’s embarrassing mistake of trusting the AI-powered bot named Tay to go unsupervised on Twitter. Confident in the bot’s capacity to operate independently, Microsoft discovered that Tay turned fast into a racist, bigoted, and hate-spewing account. Tay had to be shut down by Microsoft after only 16 hours of operation. For example, Tay answered the question “Are you a racist?” with a disturbing “because ur mexican”. A Microsoft spokesperson explained: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” (Perez 2016).

There is consistent evidence, some presented in this paper, that AI solutions open a new horizon of possibilities for teaching and learning in higher education. However, it is important to admit the current limits of the technology: AI is not (yet) ready to replace teachers, but it presents the real possibility of augmenting them. We are now seeing computing algorithms impact the most mundane aspects of daily life, from individuals’ credit scores to employability. Higher education is placed at the center of this profound change, which brings with it both extraordinary opportunities and risks. This important crossroads requires careful consideration and analysis from an academic perspective, especially as we can find tendencies to look to technological progress as a solution or replacement for sound pedagogy or good teaching. The real potential of technology in higher education, when properly used, is to extend human capabilities and the possibilities of teaching, learning, and research. The purpose of this paper is to kindle scholarly discussion of the evolving field of artificial intelligence in higher education. This aligns with some of the most ambitious research agendas in the field, such as the “National Artificial Intelligence Research and Development Strategic Plan,” released under US President Barack Obama in October 2016. The report states that “the walls between humans and AI systems are slowly beginning to erode, with AI systems augmenting and enhancing human capabilities. Fundamental research is needed to develop effective methods for human-AI interaction and collaboration” (U.S. National Science and Technology Council 2016).

As we note that significant advances in machine learning and artificial intelligence open new possibilities and challenges for higher education, it is important to observe that education is eminently a human-centric endeavor, not a technology-centric solution. Despite rapid advancements in AI, the idea that we can rely solely on technology is a dangerous path. It is important to maintain focus on the idea that humans should identify problems, critique, identify risks, and ask important questions, from issues such as privacy, power structures, and control to the requirement of nurturing creativity and leaving an open door to serendipity and unexpected paths in teaching and learning. The hype around AI can turn it into an unquestioned panacea, leaving many on their path to higher learning under the wheels of reality, as in the tragic case of the driver led under a truck by what was considered matchless software. Maintaining academic skepticism on this issue is especially important in education, an act that risks being reduced to information delivery and recollection; we need to maintain its aim of building educated minds and responsible citizens attached to the general values of humanism.

The role of technology in higher learning is to enhance human thinking and to augment the educational process, not to reduce it to a set of procedures for content delivery, control, and assessment. With the rise of AI solutions, it is increasingly important for educational institutions to stay alert to whether the power of control over the hidden algorithms that run these solutions is being monopolized by tech-lords. Frank Pasquale notes in his seminal book ‘The Black Box Society’ that “Decisions that used to be based on human reflection are now made automatically. Software encodes thousands of rules and instructions computed in a fraction of a second” (Pasquale 2015). Pasquale reveals in his book that we have not only a quasi-concentrated and powerful monopoly over these solutions, but also an intentional lack of transparency about algorithms and how they are used. This is presented casually as a normal state of affairs, the natural arrangement of the Internet era, but it translates into highly dangerous levels of unquestioned power. Those who control the algorithms that run AI solutions now have unprecedented influence over people and every sector of contemporary society. The internal architecture of mega-corporations such as Facebook or Google does not follow a democratic model, but that of benevolent dictators who know what is best and decide without consulting their internal or external subjects. The monopoly and strong control over sources of information, stifling critique and silencing de facto, through invisibilisation, views that are not aligned with the narratives promoted by tech-lords’ interests, stand in direct opposition to higher learning. Universities have a role only if they encourage dissent and the possibilities it opens.
Higher learning withers when the freedom of thinking and inquiry is suppressed in any form, as manipulation and the limitation of knowledge distort and cancel in-depth understanding and the advancement of knowledge. If we reach a point where the agenda of universities, the control over their information, and the ethos of universities are set by a handful of tech-lords, higher education is facing a very different age. The set of risks is too important to be overlooked rather than explored with courage and careful analysis.

At the same time, the rapid advancements of AI are compounded by the efforts of defunded universities to find economic solutions to balance depleted budgets. AI already presents the capability to replace a large number of administrative staff and teaching assistants in higher education. It is therefore important to explore the effects of these factors on learning in higher education, especially in the context of an increasing demand for initiative, creativity, and ‘entrepreneurial spirit’ among graduates. This paper opens an inquiry into the influence of artificial intelligence (AI) on teaching and learning in higher education. It also operates as an exploratory analysis of the literature and recent studies on how AI can change not only how students learn in universities, but also the entire architecture of higher education.

The rise of artificial intelligence and augmentation in higher education

The introduction and adoption of new technologies in learning and teaching has evolved rapidly over the past 30 years. Looking through the current lens, it is easy to forget the debates that have raged in our institutions over students being allowed to use what are now regarded as rudimentary technologies. In a longitudinal study of accommodations for students with a disability conducted between 1993 and 2005 in the USA, the authors remind us of how contentious the debate was surrounding the use of calculators and spell-check programs for students with a disability, let alone the general student body (Lazarus et al. 2008). Assistive technologies, such as text to speech, speech to text, zoom capacity, predictive text, spell checkers, and search engines, are just some examples of technologies initially designed to assist people with a disability. The use of these technological solutions was later expanded, and we now find them as generic features in all personal computers, handheld devices, and wearable devices. These technologies now augment the learning interactions of all students globally, enhancing the possibilities open to teaching and the design of educational experiences.

Moreover, artificial intelligence (AI) is now enhancing tools and instruments used day by day in cities and campuses around the world, from Internet search engines, smartphone features, and apps to public transport and household appliances. For example, the complex set of algorithms and software that powers the iPhone’s Siri is a typical example of an artificial intelligence solution that has become part of everyday experience (Bostrom and Yudkowsky 2011; Luckin 2017). Even if Apple’s Siri is labeled a low-complexity AI solution or simply a voice-controlled computer interface, it is important to remember that it started as an artificial intelligence project funded in the USA by the Defense Advanced Research Projects Agency (DARPA) in 2001. The project was later spun out into a company that was acquired by Apple, which integrated the application into the iPhone’s operating system in 2011. Google uses AI for its search engine and maps, and all new cars use AI, from the engine to the brakes and navigation. Self-driving technology is already advanced, and major companies such as Tesla, Volvo, Mercedes, and Google are making it a top development priority (Hillier et al. 2015); trials on public roads in Australia commenced in 2015. Remarkably, a mining corporation is already taking advantage of self-driving technologies, now using self-driving trucks at two major mining operations in Western Australia (Diss 2015).

Personalized solutions are also closer than we imagined: New Scientist reported at the end of 2015 on the initiative of Talkspace and IBM’s Watson to use artificial intelligence in psychotherapy (Rutkin 2015). This seems to be a major step towards changing the complex endeavor of education with AI. In fact, Nick Bostrom, Director of the Future of Humanity Institute at the UK’s Oxford University, observed as early as 2006 that artificial intelligence is an integral part of our daily life: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore” (Bostrom 2006). Indeed, very few people today identify Siri as a typical example of artificial intelligence; it is seen instead as an algorithm-based personal assistant that is part of everyday life. Given their increasing role within the global digital infrastructure, this also begs the question of how algorithms are conceived as we prepare ourselves for a range of different possible futures.

Students are now placed at the forefront of a vast array of possibilities and challenges for learning and teaching in higher education. Solutions for human-AI interaction and collaboration are already available to help people with disabilities, and they can inspire educators to apply them in education to augment learners and teachers for a more engaging process. Carl Mitcham describes in his Encyclopedia of Science, Technology and Ethics a cyborg as “a crossbreed of a human and a machine” (Mitcham 2005). The idea of cyborgs is not as far away as we may imagine, as the possibilities of combining human capacities with new technologies are already being used and developed at an accelerated pace. For example, Hugh Herr, who directs the Biomechatronics group at the MIT Media Lab and works with the Harvard–MIT Division of Health Sciences and Technology, recently observed in an interview for New Scientist that “…disability will end, I’d say, by the end of this century. And I think that’s a very conservative statement. At the rate technology is progressing, most disability will be gone in 50 years” (De Lange 2015, p. 25). His group is producing technologically advanced prosthetics and exoskeletons, pioneering bionic technology for people with or without a disability. He notes that his research group developed an interface that “uses biology to close the loop between human and machine […] Imagine a world where our physicality doesn’t decrease as we age” (De Lange 2015, p. 24). Complex computing systems using machine learning algorithms can serve people of all abilities and engage, to a certain degree, in human-like processes and complex processing tasks that can be employed in teaching and learning. This opens a new era for institutions of higher education.

This type of human-machine interface presents the immediate potential to change the way we learn, memorize, access, and create information. The question of how long it will take to use this type of interface to enhance human memory and cognition is one we are currently unable to answer. It may become reality beyond the end of this century, as the MIT scholar suggests, or much sooner when we consider the pace of change in the technologies used in teaching and learning since 2007, when the first iPhone was launched. Since then, the iPhone has not only integrated breakthrough technologies for accessing and using information that seemed impossible just a few years earlier (such as fingerprint identification and the ‘intelligent’ Siri assistant), but has also introduced a significant cultural shift that impacts our everyday lives. Either way, if we shift the focus of ‘cyborgs’ from science fiction to the idea of computer-augmented capacity for teachers and students alike, it is not unrealistic to consider that cyborgs, or ‘crossbreeds’ of humans and machines, will soon be a reality in teaching and research in the universities of the near future.

The impact of artificial intelligence is already visible in the world economy and has captured the attention of many analysts. The largest investment ever made by Google in the European Union was the acquisition in 2014 of DeepMind Technologies, for $400 million. DeepMind Technologies, now named Google DeepMind, is a London-based artificial intelligence startup specializing in machine learning and advanced algorithms. Notably, Google also made significant investments in the German Research Centre for Artificial Intelligence (DFKI GmbH), which is, according to its website, “the biggest research center worldwide in the area of Artificial Intelligence and its application, in terms of number of employees and the volume of external funds” (DFKI 2015). Tech giants like Apple, Google, Microsoft, and Facebook currently compete in the field of artificial intelligence and are investing heavily in new applications and research. Google announced in December 2015 that the company’s quantum computer, called D-Wave 2X, will be used for complex AI operations, generically referred to as optimization problems (Neven 2015). This new machine is reported to be 100 million times faster than conventional contemporary computers at certain tasks, a serious leap ahead for AI, considered by Google researchers a significant breakthrough: “We hope it helps researchers construct more efficient and more accurate models for everything from speech recognition, to web search, to protein folding” (Neven 2013).

This wave of interest and investment in artificial intelligence will soon impact universities. Most likely, financial pressures related to the large numbers of students currently undertaking higher education, driven by the goal of democratizing higher education and by the international student market, will stand as a compelling reason to seek out AI solutions. The ‘outsourcing’ of the academic workforce, in terms of the number of academics employed and of tenured positions, is now open to a massive takeover by intelligent machines (Grove 2015). The ‘massification’ of higher education and the political call to cut public funding for universities translate into a real need to cut costs. With research still being the main source of funds and prestige in international rankings, the MOOC hype unveiled a tempting solution for many university administrators: cut costs by reducing expensive academic teaching staff. This shift is currently being aggressively pursued in Australian universities, with a constant move towards casual and short-term contracts; a study conducted by the L.H. Martin Institute documents that “…there is an escalating trend in the number and percentage of academic staff on contingent appointments, and a declining trend in the percentage of academic staff with continuing appointments who undertake both teaching and research” (Andrews et al. 2016). In the UK, we find various initiatives following the same trend, such as that of the University of Warwick, which created a new department to employ all casual teaching staff and thereby outsource teaching. This new department was established to function in a way “similar to another subsidiary used to pay cleaners and catering staff, suitable to serve the University of Warwick and also sell teaching and assessment services to other institutions” (Gallagher 2015).

As the examples presented above show, the “crossbreed” of the human brain and a machine is already possible, and this will essentially challenge teachers to find new dimensions, functions, and radically new pedagogies for a different context of learning and teaching. For example, brain-computer interfaces (BCIs), which have captured the imagination of researchers across the world, are currently recording significant advances. Using brain signals with various recording and analysis methods, along with innovative technological approaches for new computing systems, specialists in the field now provide feasible solutions for remotely controlling software with a brain-computer interface (Andrea et al. 2015). BCIs are now able to capture and decode brain activity to enable communication and control by individuals with motor function disabilities (Wolpaw and Wolpaw 2012). Kübler et al. observe that at this point “studies have demonstrated fast and reliable control of brain-computer interfaces (BCIs) by healthy subjects and individuals with neurodegenerative disease alike” (Kübler et al. 2015). The concept of humanity and the possibilities of humans currently stand to be redefined by technology with unprecedented speed: technology is quickly expanding the potential to use AI functions to enhance our skills and abilities. As Andreas Schleicher observed, “Innovation in education is not just a matter of putting more technology into more classrooms; it is about changing approaches to teaching so that students acquire the skills they need to thrive in competitive global economies” (Schleicher 2015).

Past lessons, possibilities, and challenges of AI solutions

Widening participation in higher education and the continuous increase in the number of students, class sizes, staff costs, and broader financial pressures on universities make the use of technology, or teacherbots, a very attractive solution. This became evident when massive open online courses (MOOCs) captured the imagination of many university administrators. The understanding of “open courses” was that no entry requirements or fees applied, and online students could enroll and participate from any country in the world with internet access. Both of these factors enabled universities to market globally for students, resulting in massive enrolment numbers. The promise was generous, but it soon became evident that one of the problems created for teachers was the limits of their human capacity to actively engage with massive numbers of diverse students studying globally from different time zones, at different rates of progress, and with different frames of reference and foundational skills for the course they were studying. Assisting students in large classes to progress effectively through their learning experience to achieve desired outcomes, conducting assessments, and providing constructive personalized feedback remained unsolved issues. Sian Bayne observes in Teacherbot: Interventions in Automated Teaching that current perspectives on using automated methods in teaching “are driven by a productivity-oriented solutionism,” not by pedagogical or charitable reasoning, so we need to re-explore a humanistic perspective for mass education to replace the “cold technocratic imperative” (Bayne 2015). Bayne speaks from the experience of meeting the need created by the development and delivery of a massive open online course by the University of Edinburgh, a course with approximately 90,000 students enrolled from 200 countries.

The lesson of MOOCs is important and deserves attention. Popenici and Kerr observed that MOOCs were first used in 2008 and that since then: “…we have been hearing the promise of a tsunami of change that is coming over higher education. It is not uncommon with a tsunami to see people enticed by the retreat of the waters going to collect shells, thinking that this is the change that is upon them. Tragically, the real change is going to come in the form of a massive wave under which they will perish as they play on the shores. Similarly, we need to take care that we are not deluded to confuse MOOCs, which are figuratively just shells on the seabed, with the massive wall of real change coming our way” (Popenici and Kerr 2013). It became clear by 2016 that MOOCs remain just a different kind of online course, interesting and useful, but not really aimed at or capable of changing the structure and function of universities. Research and data on this topic reflect the failure of MOOCs to deliver on their proponents’ promises. More importantly, the unreserved and irrational hype that surrounded MOOCs stands as an example of what happens when decision-makers in academia decide to ignore all key principles, such as evidence-based argument and academic skepticism, and embrace a fad sold by Silicon Valley venture capitalists with no interest in learning other than financial profit. As noted in a recent book chapter, “this reckless shift impacts on the sustainability of higher learning in particular and of higher education by and large” (Popenici 2015).

There are solid arguments, some cited above in this paper, for stating that it is more realistic to consider the impact of machine learning on higher education as the real wave of change. In effect, the lessons of the past show why it is so important to avoid repeating the mistakes revealed by past fads, and not to succumb to a convenient complacency that serves only the agenda of companies in search of new (or bigger) markets. Online learning has very often proved its potential to help institutions of higher education reach some of their most ambitious goals in learning, teaching, and research. However, the lesson of MOOCs is also that a limited focus on one technological solution, without sufficient evidence-based arguments, can become a distraction for education and a perilous pathway for the financial sustainability of these institutions.

Higher education is now taking its first steps into the uncharted territory of the possibilities opened by AI in teaching, learning, and higher education organization and governance. The implications and possibilities of these technological advances can already be seen. By way of example, recent advancements in non-invasive brain-computer interfaces and artificial intelligence are opening new possibilities to rethink the role of the teacher, or to make steps towards the replacement of teachers with teacher-robots, virtual “teacherbots” (Bayne 2015; Botrel et al. 2015). Affordable brain-computer interface (BCI) devices capable of measuring when a student is fully focused on the content and learning tasks are already available (Chen et al. 2015; González et al. 2015), and supercomputers such as IBM’s Watson can provide an automated teacher presence for the entire duration of a course. The possibility of communicating with and commanding computers through thought, and the wider applications of AI in teaching and learning, represent the real technological revolution that will dramatically change the structure of higher education across the world. Personalized learning with a teacherbot, or ‘cloud lecturer,’ can be adopted for blended delivery courses or fully online courses. Teacherbots, computing solutions for the administrative part of teaching that deal mainly with content delivery, basic and administrative feedback, and supervision, are already presenting a disruptive alternative to traditional teaching assistants. An example is offered by Professor Ashok Goel’s course on knowledge-based artificial intelligence (KBAI) in the Online Master of Science in Computer Science program at Georgia Tech in the USA. One teaching assistant was so valued by students that one of them wanted to nominate her for the outstanding TA award; this TA managed to meet the highest expectations of students. The surprise at the end of the course was to find out that Jill Watson was not a real person, but a teacherbot, a virtual teaching assistant based on IBM’s Watson platform (Maderer 2016).

The story captured the imagination of many, reaching international news across the world and respected media outlets such as The New York Times and The Washington Post . However, we must be careful about the temptation to equate education with solutions provided by algorithms. There are widespread implications to the advancement of AI to the point where a computer can serve as a personalized tutor able to guide and manage students’ learning and engagement. It opens the worrying possibility of a superficial, but profitable, approach in which teaching is replaced by automated AI solutions, especially as we are at a point where we need a new pedagogical philosophy that can help students achieve the set of skills required in the twenty-first century for a balanced civic, economic, and social life. We have a new world that is based on uncertainty and challenges that change at a rapid pace, and all this requires creativity, flexibility, and the capacity to use and adapt to uncertain contexts. Graduates have to act in a world of value conflicts, information limitations, vast registers of risks, and radical uncertainty. All this, along with the ongoing possibility of staying within personal and group ‘bubbles’ of information and being exposed to vast operations of manipulation, requires new thinking about the use of technology in education and a new set of graduate attributes. As advanced as AI solutions may become, we cannot yet envisage a future where algorithms can really replace the complexity of the human mind. For certain, current developments show that it is highly unlikely to happen in the next decade, despite a shared excessive optimism. The AI hype is not yet matched by results; for example, Ruchir Puri, the Chief Architect of Watson, IBM’s AI supercomputer, recently noted that “There is a lot of hype around AI, but what it can’t do is very big right now. What it can do is very small.”

This reality may encourage policy-makers and experts to reimagine institutions of higher education in an entirely new paradigm, much more focused on imagination, creativity, and civic engagement. With the capacity to guide learning and monitor participation and student engagement with the content, AI can customize the ‘feed’ of information and materials into the course according to the learner’s needs, and provide feedback and encouragement. Teachers, in turn, can use this to prepare students for a world of hyper-complexity where the future is not reduced to the simple aim of ‘employability.’ Teacherbots are already presenting a disruptive alternative to traditional teaching staff, but it is very important to inquire at this point how we can use them for the benefit of students, in the context of a profound rethink of what is currently labeled as ‘graduate attributes’ (Mason et al. 2016 ).

Even if in 2017 we find little exploration of what a teacherbot is and what capabilities are possible now and in a predictable future, AI technology has slipped in through the back door of all our lives, and this calls for much more focused research in higher education. AI solutions are currently monitoring our choices, preferences, and movements, measuring strengths and weaknesses, and providing feedback, encouragement, badges, comparative analytics, customized news feeds, alerts, and predictive text; in effect, they are project-managing our lives. At this point, we can see a teacherbot as a complex algorithmic interface able to use artificial intelligence for personalized education, able to provide bespoke content, supervision, and guidance for students, and help for teachers. Teacherbots are defined as any machine-based software or hardware that assumes the role traditionally performed by a teaching assistant in organizing information and providing fast answers to a wide set of predictable questions; it can facilitate, monitor, assess, and manage student learning within the online learning space. These solutions are closer than many academics may think. The old system of transmitting information to passive students, in class or in front of computers, is open to disruption from highly personalized, scalable, and affordable AI solutions such as ‘Jill Watson.’ While contact time and personal guidance by faculty should be retained, and not only in elite institutions of higher education, as this will define the quality of education, intelligent machines can be used by all to meet the learning and support needs of massive numbers of students.

The rise of AI makes it impossible to ignore a serious debate about its future role in teaching and learning in higher education and about the type of choices universities will make in this regard. The fast pace of technological innovation and the associated job displacement, acknowledged widely by experts in the field (source), imply that teaching in higher education requires a reconsideration of teachers’ roles and pedagogies. The current use of technological solutions such as ‘learning management systems’ or IT solutions to detect plagiarism already raises the question of who sets the agenda for teaching and learning: corporate ventures or institutions of higher education? The rise of techlords and the quasi-monopoly of a few tech giants also come with questions regarding the importance of privacy and the possibility of a dystopian future. These issues deserve special attention, as universities should include this set of risks when thinking about a sustainable future.

Moreover, many sets of tasks that are currently placed at the core of teaching practice in higher education will be replaced by AI software based on complex algorithms, designed by programmers who can transmit their own biases or agendas into these operating systems. Ongoing critique of and inquiry into proposed solutions remain critical to guarantee that universities stay institutions able to sustain civilization and to promote and develop knowledge and wisdom.

In effect, now is the time for universities to rethink their function and pedagogical models, as well as their future relations with AI solutions and the owners of those solutions. Furthermore, institutions of higher education face a vast register of possibilities and challenges opened by the opportunity to embrace AI in teaching and learning. These solutions present new openings for education for all, while fostering lifelong learning in a strengthened model that can preserve the integrity of the core values and purpose of higher education.

We consider that there is a need for research on the ethical implications of the current control over developments in AI and the risk that the monopoly of a few entities withers the richness of human knowledge and perspectives. We also believe that it is important to focus further research on the new roles of teachers and on new learning pathways for higher degree students, with a new set of graduate attributes focused on imagination, creativity, and innovation: the set of abilities and skills that can hardly ever be replicated by machines.


Andrews, S, Bare, L, Bentley, P, Goedegebuure, L, Pugsley, C, Rance, B (2016). Contingent academic employment in Australian universities . Melbourne: LH Martin Institute. http://www.lhmartininstitute.edu.au/documents/publications/2016-contingent-academic-employment-in-australian-universities-updatedapr16.pdf . Accessed 26 Aug 2017.

Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education , 20 (4). doi: 10.1080/13562517.2015.1020783 .

Bostrom, N. (2006). AI set to exceed human brain power. CNN Science & Space. http://edition.cnn.com/2006/TECH/science/07/24/ai.bostrom/ . Accessed 10 Mar 2017.

Bostrom, N, & Yudkowsky, E (2011). The ethics of artificial intelligence. In K Frankish, WM Ransey (Eds.), Cambridge handbook of artificial intelligence , (pp. 316–334). Cambridge, UK: Cambridge University Press.


Botrel, L, Holz, EM, Kübler, A. (2015). Brain painting V2: evaluation of P300-based brain-computer interface for creative expression by an end-user following the user-centered design. Brain-Computer Interfaces , 2 (2–3),1–15.

Chen, X, Wang, Y, Nakanishi, M, Gao, X, Jung, TP, Gao, S. (2015). High-speed spelling with a noninvasive brain–computer interface. Proceedings of the National Academy of Sciences , 112 (44), E6058–E6067.

De Lange, C. (2015). Welcome to the bionic dawn. New Scientist , 227 (3032), 24–25.

Deakin University (2014). IBM Watson now powering Deakin. A new partnership that aims to exceed students’ needs. http://archive.li/kEnXm . Accessed 30 Oct 2016.

DFKI (2015). Intelligent Solutions for the Knowledge Society. The German Research Center for Artificial Intelligence. http://www.dfki.de/web?set_language=en&cl=en . Accessed 22 Nov 2016.

Diss, K. (2015). Driverless trucks move iron ore at automated Rio Tinto mines ABC, October 18. http://www.abc.net.au/news/2015-10-18/rio-tinto-opens-worlds-first-automated-mine/6863814 . Accessed 9 Apr 2017.

Gallagher, P. (2015). The University of Warwick launches new department to employ all temporary or fixed-term teaching staff. The Independent, 24 September 2015. http://www.independent.co.uk/news/education/education-news/the-university-of-warwick-launches-new-department-to-employ-all-temporary-or-fixed-term-teaching-10160384.html . Accessed 1 May 2017.

Gibney, E. (2017). Google secretly tested AI bot. Nature , 541 (7636), 142. https://doi.org/10.1038/nature.2017.21253 .

González, VM, Robbes, R, Góngora, G, Medina, S (2015). Measuring concentration while programming with low-cost BCI devices: differences between debugging and creativity tasks. In Foundations of augmented cognition, (pp. 605–615). Los Angeles, CA: Springer International Publishing.

Grove, J. (2015). TeachHigher ‘disbanded’ ahead of campus protest. Times Higher Education, 2 June 2015. https://www.timeshighereducation.com/news/teachhigher-disbanded-ahead-campus-protest . Accessed 28 Apr 2017.

Hillier, P, Wright, B, Damen, P (2015). Readiness for self-driving vehicles in Australia. http://advi.org.au/wp-content/uploads/2016/04/Workshop-Report-Readiness-for-Self-Driving-Vehicles-in-Australia.pdf . Accessed 17 May 2017.

Kübler, A, Holz, EM, Sellers, EW, Vaughan, TM. (2015). Toward independent home use of brain-computer interfaces: a decision algorithm for selection of potential end-users. Archives of Physical Medicine and Rehabilitation , 96 (3), S27–S32.

Lazarus, SS, Thurlow, ML, Lail, KE, Christensen, L. (2008). A longitudinal analysis of state accommodations policies: twelve years of change, 1993-2005. The Journal Of Special Education , 43 (2), 67–80. doi: 10.1177/0022466907313524 .

Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour , 1 (0028). doi: 10.1038/s41562-016-0028 .

Maderer, J. (2016). Artificial intelligence course creates AI teaching assistant. Georgia Tech News Center, 9 May 2016. http://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant . Accessed 28 Aug 2017.

Mason, J, Khan, K, Smith, S (2016). Literate, numerate, discriminate—realigning 21st century skills. In W Chen et al. (Eds.), Proceedings of the 24th international conference on computers in education, (pp. 609–614). Mumbai: Asia-Pacific Society for Computers in Education.

Mitcham, C (2005). Encyclopedia of science, technology, and ethics . Detroit: Macmillan Reference USA.

Neven, H. (2013). Launching the quantum artificial intelligence lab. Google Research Blog, December 8, 2015. http://googleresearch.blogspot.com.au/2013/05/launching-quantum-artificial.html . Accessed 30 Dec 2016.

Neven, H. (2015). When can quantum annealing win? Google Research Blog, 8 December 2015. http://googleresearch.blogspot.com.au/2015/12/when-can-quantum-annealing-win.html . Accessed 30 Dec 2016.

Pasquale, F (2015). The black box society. The secret algorithms that control money and information . Cambridge, Mass: Harvard University Press.


Perez, S. (2016). Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism. TechCrunch, Mar 24, 2016. https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/ . Accessed 26 Aug 2017.

Popenici, S (2015). Deceptive promises: the meaning of MOOCs-hype for higher education. In E McKay, J Lenarcic (Eds.), Macro-level learning through massive open online courses (MOOCs): strategies and predictions for the future . Hershey: IGI Global.

Popenici, S, & Kerr, S (2013). What undermines higher education, and how this impacts employment, economies and our democracies . Charleston: CreateSpace.

Reuters/ABC (2016). Tesla crash: man who died in autopilot collision filmed previous near-miss, praised car’s technology. ABC News, 2 Jul 2016. http://www.abc.net.au/news/2016-07-01/tesla-driver-killed-while-car-was-in-on-autopilot/7560126 . Accessed Aug 2017.

Russell, SJ, & Norvig, P (2010). Artificial intelligence: a modern approach , (3rd ed.). Upper Saddle River: Prentice-Hall.

Rutkin, A. (2015). Therapist in my pocket. New Scientist , 227 (3038), 20.

Schleicher, A (2015). Schools for 21st-century learners: strong leaders, confident teachers, innovative approaches . International summit on the teaching profession. Paris: OECD Publishing.

Tsur, O, Davidov, D, Rappoport, A (2010). Semi-supervised recognition of sarcastic sentences in Twitter and Amazon. In Proceedings of the fourteenth conference on computational natural language learning , (pp. 107–116). Uppsala: Association for Computational Linguistics.

U.S. National Science and Technology Council (2016). National Artificial Intelligence Research and development strategic plan . Washington DC: Networking and Information Technology Research and Development Subcommittee.

Wolpaw, JR, & Wolpaw, EW (2012). Brain-computer interfaces: something new under the sun. In Wolpaw, Wolpaw (Eds.), Brain-computer interfaces: principles and practice , (pp. 3–12). New York: Oxford University Press.



Author information

Authors and affiliations

Office of Learning and Teaching, Charles Darwin University, Casuarina Campus, Orange 1.2.15, Ellengowan Drive, Darwin, Northern Territory, 0909, Australia

Stefan A. D. Popenici

Global Access Project, HECG Higher Education Consulting Group, Level 11 10 Bridge Street, Sydney, New South Wales, 2000, Australia

Sharon Kerr


Contributions

SP conceived the study and carried out the research and data analysis, designing the sequence alignment, coordination and conclusion. SK participated in drafting the manuscript and analysed future trends and directions for further research related to this study. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Stefan A. D. Popenici .

Ethics declarations

Authors’ information

Dr. Stefan Popenici is working at Charles Darwin University as Senior Lecturer in Higher Education and is an Honorary Fellow of the Melbourne Graduate School of Education at the University of Melbourne. He is also Associate Director of the Imaginative Education Research Group at Simon Fraser University, Canada. He is an academic with extensive work experience in teaching and learning, governance, research, training, and academic development with universities in Europe, North America, South East Asia, New Zealand, and Australia. Dr. Popenici was a Senior Advisor of Romania’s Minister of Education on educational reform and academic research, a Senior Consultant of the President of De La Salle University Philippines on scholarship and research, and Expert Consultant for various international institutions in education (e.g., Fulbright Commission, Council of Europe). For his exceptional contributions to education and research and strategic leadership, the President of Romania knighted Stefan in the Order “Merit of Education.”

Sharon Kerr is CEO of Global Access Project, PhD candidate with University of Sydney and Executive member for ODLAA.

Since 1992 Sharon has worked in the area of technology enhanced learning. Sharon’s focus has been on equity issues associated with access to education.

As CEO of Global Access Project , Sharon works with major technology players including IBM and NUANCE in association with major universities in the USA, EU, and Canada with the Liberated Learning Consortium. Their focus is to provide information and solutions so that students with a disability can access the full learning experience and achieve their full potential.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Popenici, S.A.D., Kerr, S. Exploring the impact of artificial intelligence on teaching and learning in higher education. RPTEL 12 , 22 (2017). https://doi.org/10.1186/s41039-017-0062-8


Received : 01 December 2016

Accepted : 31 October 2017

Published : 23 November 2017

DOI : https://doi.org/10.1186/s41039-017-0062-8

Keywords

  • Higher education
  • Artificial intelligence
  • Teacherbots
  • Augmentation
  • Machine learning
  • Graduate attributes


Artificial Intelligence

Completed Theses

State space search solves navigation tasks and many other real-world problems. Heuristic search, especially greedy best-first search, is one of the most successful algorithms for state space search. We improve the state of the art in heuristic search in three directions.

In Part I, we present methods to train neural networks as powerful heuristics for a given state space. We present a universal approach to generate training data using random walks from a (partial) state. We demonstrate that our heuristics trained for a specific task are often better than heuristics trained for a whole domain. We show that the performance of all trained heuristics is highly complementary; there is no clear pattern as to which trained heuristic to prefer for a specific task. In general, model-based planners still outperform planners with trained heuristics, but our approaches exceed the model-based algorithms in the Storage domain. To our knowledge, a learning-based planner has exceeded the state-of-the-art model-based planners only once before, in the Spanner domain.
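The data-generation idea can be illustrated with a small sketch (hypothetical code for illustration, not the planner's actual implementation): states visited by a random walk are labeled with the number of steps at which they were reached, yielding noisy distance estimates for supervised training of a heuristic.

```python
import random

def random_walk_training_data(start, successors, walk_length, num_walks, seed=0):
    """Label states visited by random walks with the step count at which
    they were reached; the step count serves as a noisy cost estimate for
    training a neural network heuristic."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_walks):
        state = start
        for step in range(1, walk_length + 1):
            succ = successors(state)
            if not succ:
                break  # dead end: stop this walk
            state = rng.choice(succ)
            samples.append((state, step))
    return samples
```

On a simple chain of states 0 → 1 → 2 → …, each walk of length 3 deterministically yields the labeled pairs (1, 1), (2, 2), (3, 3).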

A priori, it is unknown whether a heuristic, or in the more general case a planner, performs well on a task. Hence, we train online portfolios to select the best planner for a task. To date, all online portfolios have been based on handcrafted features. In Part II, we present new online portfolios based on neural networks, which receive the complete task as input rather than just a few handcrafted features. Additionally, our portfolios can reconsider their choices. Both extensions greatly improve the state of the art of online portfolios. Finally, we show that explainable machine learning techniques, as an alternative to neural networks, also make good online portfolios. Additionally, we present methods to improve our trust in their predictions.

Even if we select the best search algorithm, we cannot solve some tasks in reasonable time. We can speed up the search if we know how it will behave in the future. In Part III, we inspect the behavior of greedy best-first search with a fixed heuristic on simple tasks of a domain to learn its behavior for any task of the same domain. Once greedy best-first search has expanded a progress state, it expands only states with lower heuristic values. We learn to identify progress states and present two methods to exploit this knowledge. Building upon this, we extract the bench transition system of a task and generalize it in such a way that we can apply it to any task of the same domain. We can use this generalized bench transition system to split a task into a sequence of simpler searches.

In all three research directions, we contribute new approaches and insights to the state of the art, and we indicate interesting topics for future work.

Greedy best-first search (GBFS) is a sibling of A* in the family of best-first state-space search algorithms. While A* is guaranteed to find optimal solutions of search problems, GBFS does not provide any guarantees but typically finds satisficing solutions more quickly than A*. A classical result of optimal best-first search shows that A* with admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. Theoretical results of this kind are useful for the analysis of heuristics in different search domains and for the improvement of algorithms. For satisficing algorithms a similarly clear understanding is currently lacking. We examine the search behavior of GBFS in order to make progress towards such an understanding.
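A minimal GBFS implementation makes the role of tie-breaking concrete (an illustrative sketch, not the planning systems studied in the thesis): among open states with equal heuristic value, the secondary key decides which one is expanded next, and different choices can lead to different expansions and solutions.

```python
import heapq

def gbfs(initial, goal, successors, h, tiebreak=None):
    """Greedy best-first search: always expand an open state with minimal
    heuristic value h. `tiebreak` maps a state to a secondary key; by
    default, insertion order (FIFO) breaks ties among equal h-values."""
    counter = 0
    key = (lambda s, c: c) if tiebreak is None else (lambda s, c: (tiebreak(s), c))
    open_list = [(h(initial), key(initial, counter), initial, [initial])]
    closed = set()
    expanded = []
    while open_list:
        _, _, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        expanded.append(state)
        if state == goal:
            return path, expanded
        for succ in successors(state):
            if succ not in closed:
                counter += 1
                heapq.heappush(open_list, (h(succ), key(succ, counter), succ, path + [succ]))
    return None, expanded
```

On a diamond-shaped graph where two successors share the same h-value, FIFO tie-breaking expands one of them first and a custom tiebreak the other, changing the returned path.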

We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by GBFS in sequence. High-water mark benches allow us to exactly determine the set of states that GBFS expands under at least one tie-breaking strategy. We show that benches contain craters. Once GBFS enters a crater, it has to expand every state in the crater before being able to escape.

Benches and craters allow us to characterize the best-case and worst-case behavior of GBFS in given search instances. We show that computing the best-case or worst-case behavior of GBFS is NP-complete in general but can be computed in polynomial time for undirected state spaces.

We present algorithms for extracting the set of states that GBFS potentially expands and for computing the best-case and worst-case behavior. We use the algorithms to analyze GBFS on benchmark tasks from planning competitions under a state-of-the-art heuristic. Experimental results reveal interesting characteristics of the heuristic on the given tasks and demonstrate the importance of tie-breaking in GBFS.

Classical planning tackles the problem of finding a sequence of actions that leads from an initial state to a goal. Over the last decades, planning systems have become significantly better at answering the question whether such a sequence exists by applying a variety of techniques which have become more and more complex. As a result, it has become nearly impossible to formally analyze whether a planning system is actually correct in its answers, and we need to rely on experimental evidence.

One way to increase trust is the concept of certifying algorithms, which provide a witness which justifies their answer and can be verified independently. When a planning system finds a solution to a problem, the solution itself is a witness, and we can verify it by simply applying it. But what if the planning system claims the task is unsolvable? So far there was no principled way of verifying this claim.

This thesis contributes two approaches to create witnesses for unsolvable planning tasks. Inductive certificates are based on the idea of invariants. They argue that the initial state is part of a set of states that we cannot leave and that contains no goal state. In our second approach, we define a proof system that proves in an incremental fashion that certain states cannot be part of a solution until it has proven that either the initial state or all goal states are such states.
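On an explicitly represented state space, checking an inductive certificate reduces to three simple tests. The sketch below is a toy stand-in (explicit sets of states) for the compact set representations a practical certifier would need:

```python
def verify_inductive_certificate(cert, initial, goals, transitions):
    """An inductive certificate `cert` (a set of states) proves unsolvability
    if (1) it contains the initial state, (2) no transition leaves it, and
    (3) it contains no goal state."""
    contains_initial = initial in cert
    inductive = all(t in cert for s, t in transitions if s in cert)
    excludes_goals = cert.isdisjoint(goals)
    return contains_initial and inductive and excludes_goals
```

For example, with transitions {0→1, 1→0, 2→3} and goal state 3, the set {0, 1} is a valid certificate that the task starting in state 0 is unsolvable, while {0} alone is not inductive because the transition 0→1 leaves it.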

Both approaches are complete in the sense that a witness exists for every unsolvable planning task, and they can be verified efficiently (with respect to the size of the witness) by an independent verifier if certain criteria are met. To show their applicability to state-of-the-art planning techniques, we provide an extensive overview of how these approaches can cover several search algorithms, heuristics, and other techniques. Finally, we show with an experimental study that generating and verifying these witnesses is not only theoretically possible but also practically feasible, thus making a first step towards fully certifying planning systems.

Heuristic search with an admissible heuristic is one of the most prominent approaches to solving classical planning tasks optimally. In the first part of this thesis, we introduce a new family of admissible heuristics for classical planning, based on Cartesian abstractions, which we derive by counterexample-guided abstraction refinement. Since one abstraction usually is not informative enough for challenging planning tasks, we present several ways of creating diverse abstractions. To combine them admissibly, we introduce a new cost partitioning algorithm, which we call saturated cost partitioning. It considers the heuristics sequentially and uses the minimum amount of costs that preserves all heuristic estimates for the current heuristic before passing the remaining costs to subsequent heuristics until all heuristics have been served this way.
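The sequential idea can be sketched for abstraction heuristics given as explicit abstract transition systems (illustrative code with hypothetical data structures; a transition is a `(state, operator, state)` triple). Each abstraction in the order keeps only the operator costs it needs to preserve its goal distances, in this non-negative variant the largest heuristic decrease along any of the operator's transitions, and passes the remaining costs on:

```python
import heapq

def goal_distances(transitions, goals, cost):
    """Backward Dijkstra: cheapest cost from each abstract state to a goal."""
    back = {}
    for s, op, t in transitions:
        back.setdefault(t, []).append((s, op))
    dist = {g: 0 for g in goals}
    pq = [(0, g) for g in goals]
    heapq.heapify(pq)
    while pq:
        d, t = heapq.heappop(pq)
        if d > dist.get(t, float("inf")):
            continue
        for s, op in back.get(t, []):
            nd = d + cost[op]
            if nd < dist.get(s, float("inf")):
                dist[s] = nd
                heapq.heappush(pq, (nd, s))
    return dist

def saturated_cost_partitioning(abstractions, cost):
    """Process abstractions in order; each keeps only the operator costs
    needed to preserve its goal distances and passes the remainder on."""
    remaining = dict(cost)
    heuristics = []
    for transitions, goals in abstractions:
        h = goal_distances(transitions, goals, remaining)
        heuristics.append(h)
        saturated = {op: 0 for op in remaining}
        for s, op, t in transitions:
            hs, ht = h.get(s, float("inf")), h.get(t, float("inf"))
            if hs != float("inf") and ht != float("inf"):
                saturated[op] = max(saturated[op], hs - ht)
        for op in remaining:
            remaining[op] -= saturated[op]
    return heuristics
```

In a toy example with operators a (cost 2) and b (cost 3), a first abstraction that needs only a consumes a's full cost; a second abstraction can then still use b at full cost, or reach its goal for free via the now zero-cost a, and the sum of the resulting estimates remains admissible.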

In the second part, we show that saturated cost partitioning is strongly influenced by the order in which it considers the heuristics. To find good orders, we present a greedy algorithm for creating an initial order and a hill-climbing search for optimizing a given order. Both algorithms make the resulting heuristics significantly more accurate. However, we obtain the strongest heuristics by maximizing over saturated cost partitioning heuristics computed for multiple orders, especially if we actively search for diverse orders.

The third part provides a theoretical and experimental comparison of saturated cost partitioning and other cost partitioning algorithms. Theoretically, we show that saturated cost partitioning dominates greedy zero-one cost partitioning. The difference between the two algorithms is that saturated cost partitioning opportunistically reuses unconsumed costs for subsequent heuristics. By applying this idea to uniform cost partitioning we obtain an opportunistic variant that dominates the original. We also prove that the maximum over suitable greedy zero-one cost partitioning heuristics dominates the canonical heuristic and show several non-dominance results for cost partitioning algorithms. The experimental analysis shows that saturated cost partitioning is the cost partitioning algorithm of choice in all evaluated settings and it even outperforms the previous state of the art in optimal classical planning.

Classical planning is the problem of finding a sequence of deterministic actions in a state space that lead from an initial state to a state satisfying some goal condition. The dominant approach to optimally solve planning tasks is heuristic search, in particular A* search combined with an admissible heuristic. While there exist many different admissible heuristics, we focus on abstraction heuristics in this thesis, and in particular, on the well-established merge-and-shrink heuristics.

Our main theoretical contribution is to provide a comprehensive description of the merge-and-shrink framework in terms of transformations of transition systems. Unlike previous accounts, our description is fully compositional, i.e. can be understood by understanding each transformation in isolation. In particular, in addition to the name-giving merge and shrink transformations, we also describe pruning and label reduction as such transformations. The latter is based on generalized label reduction, a new theory that removes all of the restrictions of the previous definition of label reduction. We study the four types of transformations in terms of desirable formal properties and explain how these properties transfer to heuristics being admissible and consistent or even perfect. We also describe an optimized implementation of the merge-and-shrink framework that substantially improves the efficiency compared to previous implementations.
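The name-giving merge transformation is the synchronized product of two transition systems: both factors must move on the same label simultaneously. A minimal sketch with explicit transition lists (a hypothetical representation; it assumes each factor has transitions, including self-loops, for every label it is affected by):

```python
def merge(trans1, goals1, trans2, goals2):
    """Synchronized product: product states are pairs, and a product
    transition with label l exists iff both factors have an l-transition."""
    product = [((s1, s2), l1, (t1, t2))
               for (s1, l1, t1) in trans1
               for (s2, l2, t2) in trans2
               if l1 == l2]
    goals = {(g1, g2) for g1 in goals1 for g2 in goals2}
    return product, goals
```

For instance, merging a factor with transition (0, a, 1) and goal 1 with a factor that self-loops on a in state x and reaches its goal y only via b yields the single product transition ((0, x), a, (1, x)) and the goal set {(1, y)}.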

Furthermore, we investigate the expressive power of merge-and-shrink abstractions by analyzing factored mappings, the data structure they use for representing functions. In particular, we show that there exist certain families of functions that can be compactly represented by so-called non-linear factored mappings but not by linear ones.

On the practical side, we contribute several non-linear merge strategies to the merge-and-shrink toolbox. In particular, we adapt a merge strategy from model checking to planning, provide a framework to enhance existing merge strategies based on symmetries, devise a simple score-based merge strategy that minimizes the maximum size of transition systems of the merge-and-shrink computation, and describe another framework to enhance merge strategies based on an analysis of causal dependencies of the planning task.

In a large experimental study, we show the evolution of the performance of merge-and-shrink heuristics on planning benchmarks. Starting with the state of the art before the contributions of this thesis, we subsequently evaluate all of our techniques and show that state-of-the-art non-linear merge-and-shrink heuristics improve significantly over the previous state of the art.

Admissible heuristics are the main ingredient when solving classical planning tasks optimally with heuristic search. Higher admissible heuristic values are more accurate, so combining them in a way that dominates their maximum and remains admissible is an important problem.

The thesis makes three contributions in this area. Extensions to cost partitioning (a well-known heuristic combination framework) allow to produce higher estimates from the same set of heuristics. The new heuristic family called operator-counting heuristics unifies many existing heuristics and offers a new way to combine them. Another new family of heuristics called potential heuristics allows to cast the problem of finding a good heuristic as an optimization problem.

Both operator-counting and potential heuristics are closely related to cost partitioning. They offer a new look on cost partitioned heuristics and already sparked research beyond their use as classical planning heuristics.

Master's theses

Optimal planning is an ongoing topic of research and requires efficient heuristic search algorithms. One way of calculating such heuristics is through the use of Linear Programs (LPs) and LP solvers. This thesis investigates how different LP solving strategies and solver settings affect the performance of calculating LP-based heuristics. Using the Fast Downward planning system and a comprehensive benchmark set of planning tasks, we conducted a series of experiments to determine the effectiveness of the primal and dual simplex methods and the primal-dual logarithmic barrier method. Our results show that the choice of LP solver and the application of specific solver settings influence the efficiency of calculating the required heuristics; the default setting of CPLEX is not optimal in some cases and can be improved by specifying a solving algorithm or using other non-default solver settings. This thesis lays the groundwork for future research on using different LP solving algorithms and solver settings in the context of LP-based heuristic search in optimal planning.
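The effect of choosing a solving algorithm can be reproduced in miniature with SciPy's `linprog`, which exposes both a dual simplex and an interior-point (barrier) method via its HiGHS backend (an illustrative example; the experiments in the thesis use CPLEX inside Fast Downward, not SciPy):

```python
from scipy.optimize import linprog

# Tiny LP: minimize x + 2y subject to x + y >= 1 and x, y >= 0.
# linprog expects constraints as A_ub @ x <= b_ub, so the >= row is negated.
c = [1, 2]
A_ub = [[-1, -1]]
b_ub = [-1]

results = {}
for method in ("highs-ds", "highs-ipm"):  # dual simplex vs. interior point
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method=method)
    results[method] = res.fun
```

Both methods find the same optimum (objective value 1.0 at x = 1, y = 0); on larger LPs, as the thesis observes for heuristic computations, their running times can differ substantially.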

Classical planning tasks are typically formulated in PDDL. Some of them can be described more concisely using derived variables. Unlike basic variables, their values cannot be changed by operators; instead they are determined by axioms which specify conditions under which they take a certain value. Planning systems often support axioms in their search component, but their heuristics' support is limited or nonexistent. This leads to decreased search performance on tasks that use axioms. We compile axioms away using our implementation of a known algorithm in the Fast Downward planner. Our results show that the compilation has a negative impact on search performance, with its only benefit being the ability to use heuristics that have no axiom support. As a compromise between performance and expressivity, we identify axioms of a simple form and devise a compilation for them. We compile away all axioms in several of the tested domains without a decline in search performance.
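The semantics of derived variables can be sketched in a few lines, assuming negation-free axioms (the general stratified case is more involved): derived facts default to false, and axioms fire until a fixpoint is reached.

```python
# Sketch (assumption: negation-free axioms): derived facts default to
# False and are set True by axioms "head <- body" until a fixpoint.
def evaluate_axioms(state, axioms):
    """state: set of true basic facts; axioms: list of (head, body) pairs."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in axioms:
            if head not in derived and all(
                    b in state or b in derived for b in body):
                derived.add(head)
                changed = True
    return state | derived

# "above" as the transitive closure of "on" (hypothetical fact names):
axioms = [
    ("above_ab", ["on_ab"]),
    ("above_bc", ["on_bc"]),
    ("above_ac", ["above_ab", "above_bc"]),
]
print(sorted(evaluate_axioms({"on_ab", "on_bc"}, axioms)))
# ['above_ab', 'above_ac', 'above_bc', 'on_ab', 'on_bc']
```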

The International Planning Competitions (IPCs) serve as a testing suite for planning systems. Their domains are well motivated, as they are derived from, or possess characteristics analogous to, real-life applications. In this thesis, we study the computational complexity of the plan existence and bounded plan existence decision problems of the following grid-based IPC domains: VisitAll, TERMES, Tidybot, Floortile, and Nurikabe. In all of these domains, one or more agents move through a rectangular grid (potentially with obstacles), performing actions along the way. In many cases, we engineer instances that can be solved only if the movement of the agent or agents follows a Hamiltonian path or cycle in a grid graph. This gives rise to many NP-hardness reductions from Hamiltonian path/cycle problems on grid graphs. In the case of VisitAll and Floortile, we give necessary and sufficient conditions for deciding the plan existence problem in polynomial time. We also show that Tidybot has the game Push-1F as a special case, and its plan existence problem is thus PSPACE-complete. The hardness proofs in this thesis highlight hard instances of these domains. Moreover, by assigning a complexity class to each domain, researchers and practitioners can better assess the strengths and limitations of new and existing algorithms in these domains.

Planning tasks can be used to describe many real-world problems of interest, so solving them optimally is an avenue of great interest. One established and successful approach for optimal planning is the merge-and-shrink framework, which decomposes the task into a factored transition system. The factors initially represent the behaviour of a single state variable and are repeatedly combined and abstracted. The resulting abstraction is then used as a heuristic to guide search in the original planning task. Existing merge-and-shrink transformations keep the factored transition system orthogonal, meaning that each variable of the planning task is represented in no more than one factor at any point. In this thesis we introduce the clone transformation, which duplicates a factor of the factored transition system, making it non-orthogonal. We introduce and implement two classes of clone strategies in the Fast Downward planning system and conclude that, while theoretically promising, our clone strategies are practically inefficient, as their performance was worse than that of state-of-the-art merge-and-shrink methods.

This thesis aims to present a novel approach for improving the performance of classical planning algorithms by integrating cost partitioning with merge-and-shrink techniques. Cost partitioning is a well-known technique for admissibly adding multiple heuristic values. Merge-and-shrink, on the other hand, is a technique to generate well-informed abstractions. The "merge" part of the technique is based on creating an abstract representation of the original problem by replacing two transition systems with their synchronised product. In contrast, the "shrink" part refers to reducing the size of the factor. By combining these two approaches, we aim to leverage the strengths of both methods to achieve better scalability and efficiency in solving classical planning problems. Considering a range of benchmark domains and the Fast Downward planning system, the experimental results show that the proposed method achieves the goal of fusing merge and shrink with cost partitioning towards better outcomes in classical planning.

Planning is the process of finding a path in a planning task from the initial state to a goal state. Multiple algorithms have been implemented to solve such planning tasks, one of them being the Property-Directed Reachability algorithm. Property-Directed Reachability utilizes a series of propositional formulas called layers to represent a superset of the states with a goal distance of at most the layer index. The algorithm iteratively strengthens the layer formulas so that they represent as few states as possible, excluding states whose goal distance is higher than the layer index. The goal of this thesis is to implement a pre-processing step that seeds the layers with a formula which already excludes as many states as possible, to potentially improve run-time performance. We use the pattern database heuristic and its associated pattern generators to exploit the structure of the planning task in the seeding algorithm. We found that seeding does not consistently improve the performance of the Property-Directed Reachability algorithm: although we observed a significant reduction in planning time for some tasks, it increased significantly for others.

Certifying algorithms increase trust by producing, alongside the computed result, a certificate that affirms its correctness. By inspecting the certificate, it is possible to verify the produced output. For solvable instances, modern planning systems have long been certifying: the generated plan acts as a certificate.

Only recently have the first steps been taken towards certifying unsolvability judgments, in the form of inductive certificates that represent certain sets of states. Inductive certificates are expressed with the help of propositional formulas in a specific formalism.

In this thesis, we investigate the use of propositional formulas in conjunctive normal form (CNF) as a formalism for inductive certificates. First, we look into an approach that allows us to construct formulas representing inductive certificates in CNF. To show the general applicability of this approach, we extend it to the family of delete relaxation heuristics. Furthermore, we present how a planning system can generate an inductive validation formula, a single formula that can be used to validate whether the set found by the planner is indeed an inductive certificate. Finally, we show with an experimental evaluation that the CNF formalism can be feasible in practice for the generation and validation of inductive validation formulas.
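The idea of an inductive certificate can be sketched on explicit state sets: the set S must contain the initial state, exclude all goal states, and be closed under the transition relation. The thesis represents S as a CNF formula and discharges these checks symbolically; this toy validator (invented example) enumerates states instead.

```python
# Sketch with explicit state sets (the thesis represents S as a CNF
# formula and performs these checks symbolically instead).
def is_unsolvability_certificate(S, init, goals, successors):
    if init not in S:
        return False                      # must contain the initial state
    if any(g in S for g in goals):
        return False                      # must be disjoint from the goals
    return all(t in S for s in S for t in successors(s))  # inductive

# Toy task: a counter that can only count upward; goal 3 unreachable from 4.
def successors(s):
    return [s + 1] if s < 10 else []

S = set(range(4, 11))                     # states 4..10
print(is_unsolvability_certificate(S, init=4, goals=[3],
                                   successors=successors))   # True
```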

In generalized planning the aim is to solve whole classes of planning tasks instead of single tasks one at a time. Generalized representations provide information or knowledge about such classes to help solve them. This work compares the expressiveness of three generalized representations, generalized potential heuristics, policy sketches and action schema networks, in terms of compilability. We use a notion of equivalence that requires two generalized representations to decompose the tasks of a class into the same subtasks. We present compilations between pairs of equivalent generalized representations, and proofs of impossibility where no compilation exists.

A Digital Microfluidic Biochip (DMFB) is a digitally controllable lab-on-a-chip. Droplets of fluids are moved, merged and mixed on a grid. Routing these droplets efficiently has been tackled by various approaches. We use temporal planning for droplet routing, inspired by its use in quantum circuit compilation. We test a model for droplet routing in both classical and temporal planning and compare the two versions. We show that our classical planning model is an efficient method for finding droplet routes on DMFBs. We then extend our model to include spawning, disposing, merging, splitting and mixing of droplets. The results of these extensions show that we are able to find plans for simple experiments; when scaling the problem size to real-life experiments, however, our model fails to find plans.

Cost partitioning is a technique for calculating heuristics in classical optimal planning that involves solving a linear program. This linear program can be decomposed into a master problem and several pricing problems. In this thesis we combine Fourier-Motzkin elimination and the double description method in different ways to precompute the generating rays of the pricing problems. We empirically evaluate these approaches and propose a new method that replaces the Fourier-Motzkin elimination. Our new method improves the performance of our approaches with respect to runtime and peak memory usage.
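A minimal sketch of one Fourier-Motzkin elimination step (a generic textbook version, not the thesis implementation): to eliminate a variable, pair every inequality in which it has a positive coefficient with every one in which it is negative, and combine each pair so the variable cancels.

```python
# Minimal Fourier-Motzkin sketch: eliminate variable k from a system of
# inequalities given as (coeffs, bound), meaning sum(c * x) <= bound.
from fractions import Fraction

def eliminate(ineqs, k):
    pos, neg, rest = [], [], []
    for coeffs, b in ineqs:
        c = coeffs[k]
        (pos if c > 0 else neg if c < 0 else rest).append((coeffs, b))
    out = list(rest)
    for cp, bp in pos:
        for cn, bn in neg:
            s, t = Fraction(1, cp[k]), Fraction(-1, cn[k])
            combined = [s * a + t * c for a, c in zip(cp, cn)]
            out.append((combined, s * bp + t * bn))
    return out

# x + y <= 4, -x <= 0, -y <= 0; eliminating x leaves 0 <= y <= 4.
system = [([1, 1], 4), ([-1, 0], 0), ([0, -1], 0)]
for coeffs, b in eliminate(system, 0):
    print([str(c) for c in coeffs], str(b))
```

Each elimination step can square the number of inequalities, which is why the thesis looks for cheaper alternatives.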

The increasing amount of available data has enabled new scheduling approaches. Aviation is one of the domains most concerned, as aircraft engines give rise to millions of maintenance events operated by staff worldwide. In this thesis we present a constraint-programming-based algorithm for the aircraft maintenance scheduling problem. We want to find the best time to perform maintenance by determining which employee will carry out the work and when. We report how the scheduling process in aviation can be automated.

Stochastic state-space tasks are mainly tackled with methods from artificial intelligence. PROST2014 is state of the art at determining good actions in an MDP environment. In this thesis, we aim to outperform the dominant planning system PROST2014 by providing a heuristic based on neural networks. For this purpose, we introduce two variants of neural networks that estimate the Q-value for a pair of state and action. Since we chose supervised learning as the learning method, the generation of training data was one of the main tasks, in addition to designing the architecture and components of the neural networks. To determine the most suitable network parameters, we performed a sequential parameter search, from which we expected a local optimum of the model settings. In the end, PROST2014 could not be surpassed in the overall rating, but in individual domains the neural networks achieved higher final scores. This result shows the potential of the approach and points to possible adaptations in future work.

In classical planning, there are tasks that are hard and tasks that are easy. We can measure the complexity of a task with the correlation complexity, the improvability width, and the novelty width. In this work, we compare these measures.

We investigate what causes a correlation complexity of at least 2. To do so, we translate the state space into a vector space, which allows us to make use of linear algebra and convex cones.

Additionally, we introduce the Basel measure, a new measure based on potential heuristics, and therefore similar to the correlation complexity but also comparable to the novelty width. We show that the Basel measure is a lower bound for the correlation complexity and that the novelty width plus one is an upper bound for the Basel measure.

Furthermore, we compute the Basel measure for some tasks of the International Planning Competitions and show that the translation of a task can increase the Basel measure by removing seemingly irrelevant state variables.

Unsolvability is an important result in classical planning and has seen increased interest in recent years. This thesis explores unsolvability detection by automatically generating parity arguments, a well-known way of proving unsolvability. The argument requires an invariant measure whose parity remains constant across all reachable states, while all goal states are of the opposite parity. We express parity arguments using potential functions over the two-element field F2. We develop a set of constraints that describes potential functions with the necessary separating property, and show that the constraints can be represented efficiently for up to two-dimensional features. Enhanced with mutex information, this yields an algorithm that tests whether a parity function exists for a given planning task; the existence of such a function proves the task unsolvable. To determine its practical use, we empirically evaluate our approach on a benchmark of unsolvable problems and compare its performance to a state-of-the-art unsolvability planner. Lastly, we analyze the arguments found by our algorithm to confirm their validity and understand their expressive power.
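A toy version of such a parity argument (illustrative only, not the thesis's constraint-based synthesis): take the sum of a set of binary state variables modulo 2 as the potential function. If every operator flips an even number of tracked variables, the parity is invariant; if the goal then requires the opposite parity of the initial state, the task is unsolvable.

```python
# Toy parity argument over binary state variables (invented example).
def parity(state, features):
    return sum(state[v] for v in features) % 2

def op_preserves_parity(effect, features):
    # effect: dict var -> flipped?; flipping an even number of tracked
    # variables leaves the parity of the sum unchanged.
    return sum(1 for v in features if effect.get(v)) % 2 == 0

features = ["p", "q", "r"]
operators = [{"p": True, "q": True}, {"q": True, "r": True}]
init = {"p": 0, "q": 0, "r": 0}
goal = {"p": 1, "q": 0, "r": 0}

invariant = all(op_preserves_parity(eff, features) for eff in operators)
unsolvable = invariant and parity(init, features) != parity(goal, features)
print(unsolvable)   # True: no sequence of flips can reach odd parity
```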

We implemented the invariant synthesis algorithm proposed by Rintanen and experimentally compared it against Helmert’s mutex group synthesis algorithm as implemented in Fast Downward.

The context for the comparison is the translation of propositional STRIPS tasks to FDR tasks, which requires the identification of mutex groups.

Because of its dominating lead in translation speed, combined with only marginal advantages in search performance, Helmert's algorithm is clearly better for most uses. Meanwhile, Rintanen's algorithm is capable of finding invariants other than mutexes, which Helmert's algorithm by design cannot do.

The International Planning Competition (IPC) is a competition of state-of-the-art planning systems, which are evaluated on a variety of problems. It addresses the challenges of AI planning by analyzing classical, probabilistic and temporal planning and by presenting new problems for future research. Some of the probabilistic domains introduced in IPC 2018 are Academic Advising, Chromatic Dice, Cooperative Recon, Manufacturer, Push Your Luck, and Red-finned Blue-eyes.

This thesis aims to solve (near-)optimally two probabilistic IPC 2018 domains, Academic Advising and Chromatic Dice, using different techniques for each. In Academic Advising, we use relevance analysis to remove irrelevant actions and state variables from the planning task. We then convert the problem from probabilistic to classical planning, which lets us solve it efficiently. In Chromatic Dice, we implement backtracking search to solve the smaller instances optimally. More complex instances are partitioned into several smaller planning tasks, and a near-optimal policy is derived as a combination of the optimal solutions to the small instances.

The motivation for finding (near-)optimal policies is related to the IPC score, which measures the quality of planners. By providing optimal upper bounds for these domains, we contribute to stabilizing the IPC score evaluation metric for them.

Most well-known and traditional online planners for probabilistic planning are in some way based on Monte-Carlo Tree Search. SOGBOFA, symbolic online gradient-based optimization for factored action MDPs, offers a new perspective: it constructs a function graph encoding the expected reward for a given input state, using independence assumptions for states and actions. On this function, it performs gradient ascent as a symbolic search that optimizes the actions for the current state. This unique approach to probabilistic planning has shown very strong results and even more potential. In this thesis, we attempt to integrate the ideas of SOGBOFA into the traditionally successful Trial-based Heuristic Tree Search framework. Specifically, we design and evaluate two heuristics based on the aforementioned graph and its Q-value estimations, as well as on the search using gradient ascent. We implement and evaluate these heuristics in the Prost planner, along with a version of the current standalone planner.

In this thesis, we consider cyclical dependencies between landmarks for cost-optimal planning. Landmarks denote properties that must hold at least once in all plans. However, if the orderings between them induce cyclical dependencies, one of the landmarks in each cycle must be achieved an additional time. We propose the generalized cycle-covering heuristic which considers this in addition to the cost for achieving all landmarks once.

Our research is motivated by recent applications of cycle-covering in the Freecell and logistics domains, where it yields near-optimal results. We carry it over to domain-independent planning using a linear programming approach: the relaxed version of a minimum hitting set problem for the landmarks is enhanced by constraints concerned with cyclical dependencies between them. In theory, this approach surpasses a heuristic that only considers landmarks.
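The shape of this LP can be sketched with SciPy standing in for the thesis's LP machinery (the landmarks, costs and cycle constraint below are invented): one covering constraint per landmark, plus one constraint per cycle demanding an extra achievement.

```python
# Sketch of the relaxed hitting-set LP over operator counts Y_o, with an
# extra constraint for a cycle between two landmarks (invented data).
from scipy.optimize import linprog

cost = [1.0, 1.0]                        # operators a and b, cost 1 each
landmarks = [[-1.0, 0.0], [0.0, -1.0]]   # Y_a >= 1 and Y_b >= 1
plain = linprog(cost, A_ub=landmarks, b_ub=[-1.0, -1.0], method="highs")

# A cycle over both landmarks forces one extra achievement: Y_a + Y_b >= 3.
cycle = linprog(cost, A_ub=landmarks + [[-1.0, -1.0]],
                b_ub=[-1.0, -1.0, -3.0], method="highs")
print(plain.fun, cycle.fun)   # the cycle constraint raises 2.0 to 3.0
```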

We apply the cycle-covering heuristic in practice, where its theoretical dominance is confirmed: many planning tasks contain cyclical dependencies, and considering them affects the heuristic estimates favorably. However, the number of tasks solved using the improved heuristic is virtually unaffected. We still believe that considering this feature of landmarks offers great potential for future work.

Potential heuristics are a class of heuristics used in classical planning to guide a search algorithm towards a goal state. Most of the existing research on potential heuristics is focused on finding heuristics that are admissible, such that they can be used by an algorithm such as A* to arrive at an optimal solution. In this thesis, we focus on the computation of potential heuristics for satisficing planning, where plan optimality is not required and the objective is to find any solution. Specifically, our focus is on the computation of potential heuristics that are descending and dead-end avoiding (DDA), since these properties guarantee favorable search behavior when used with greedy search algorithms such as hill-climbing. We formally prove that the computation of DDA heuristics is a PSPACE-complete problem and propose several approximation algorithms. Our evaluation shows that the resulting heuristics are competitive with established approaches such as pattern databases in terms of heuristic quality but suffer from several performance bottlenecks.

Most automated planners use heuristic search to solve the tasks. Usually, the planners get as input a lifted representation of the task in PDDL, a compact formalism describing the task using a fragment of first-order logic. The planners then transform this task description into a grounded representation where the task is described in propositional logic. This new grounded format can be exponentially larger than the lifted one, but many planners use this grounded representation because it is easier to implement and reason about.

However, this transformation between lifted and grounded representations is not always tractable. When this is the case, there is not much that planners based on heuristic search can do: since the transformation is a required preprocessing step, if it fails, the whole planner fails.

To address the grounding problem, we introduce new methods for dealing with tasks that cannot be grounded. Our work aims to find good ways to perform heuristic search directly on a lifted representation of planning problems. We take the point of view of planning as a database progression problem and borrow solutions from the areas of relational algebra and database theory.

Our theoretical and empirical results are encouraging: several instances that were never solved by any planner in the literature are now solved by our new lifted planner. For example, our planner can solve the challenging Organic Synthesis domain using breadth-first search, while state-of-the-art planners cannot solve more than 60% of its instances. Furthermore, our results offer a new perspective on, and a deep theoretical study of, lifted representations for planning tasks.

The generation of independently verifiable proofs for the unsolvability of planning tasks using different heuristics, including linear Merge-and-Shrink heuristics, is possible using a proof system framework. Proof generation in the case of non-linear Merge-and-Shrink heuristics, however, is currently not supported. This is due to the lack of a suitable state set representation formalism that compactly represents the states mapped to a certain value in the corresponding Merge-and-Shrink representation (MSR). In this thesis, we overcome this shortcoming by using Sentential Decision Diagrams (SDDs) as set representations. We describe an algorithm that constructs the desired SDD from the MSR and show that efficient proof verification is possible with SDDs as the representation formalism. Additionally, we use a proof-of-concept implementation to analyze the overhead incurred by the proof generation functionality and the runtime of the proof verification.

The operator-counting framework is a framework in classical planning for heuristics based on linear programming. It covers several state-of-the-art linear programming heuristics, among them the post-hoc optimization heuristic. In this thesis we use post-hoc optimization constraints and evaluate them under altered cost functions instead of the original cost function of the planning task. We show that such cost-altered post-hoc optimization constraints are also covered by the operator-counting framework and that they can achieve improved heuristic estimates compared with post-hoc optimization constraints under the original cost function. In our experiments we were not able to achieve improved problem coverage, as we did not find a method for generating favorable cost functions that works well across all domains.

Heuristic forward search is the state-of-the-art approach to solving classical planning problems. Bidirectional heuristic search, on the other hand, has a lot of potential but has never delivered on those expectations in practice. Only recently, the near-optimal bidirectional search algorithm (NBS) was introduced by Chen et al.; as the name suggests, NBS expands nearly the optimal number of states to solve any search problem. This is a novel achievement and makes NBS a very promising and efficient search algorithm. With this premise in mind, we raise the question of how applicable NBS is to planning. In this thesis, we investigate this question by implementing NBS in the state-of-the-art planner Fast Downward and analysing its performance on the benchmarks of the latest International Planning Competition. We additionally implement fractional meet-in-the-middle and computeWVC to analyse the performance of NBS more thoroughly with regard to the structure of the problem task.

The conducted experiments show that NBS can successfully be applied to planning, as it was able to consistently outperform A*. Especially good results were achieved in the domains blocks, driverlog, floortile-opt11-strips, get-opt14-strips, logistics00, and termes-opt18-strips. Analysing these results, we deduce that the efficiency of forward and backward search depends heavily on the underlying implicit structure of the transition system induced by the problem task. This suggests that bidirectional search is inherently better suited for certain problems. Furthermore, we find that this aptitude for a certain search direction correlates with the domain, providing a powerful analytic tool to derive a priori the effectiveness of certain search approaches.

In conclusion, even without intricate improvements, the NBS algorithm is able to compete with A* and therefore has potential for future research. Additionally, the underlying transition system of a problem instance is shown to be an important factor influencing the efficiency of certain search approaches. This knowledge could be valuable for devising portfolio planners.

Multiple Sequence Alignment (MSA) is the problem of aligning multiple biological sequences in the evolutionarily most plausible way. It can be viewed as a shortest path problem through an n-dimensional lattice. Because of its large branching factor of 2^n − 1, it has received broad attention in the artificial intelligence community. Finding a globally optimal solution for more than a few sequences requires sophisticated heuristics and bounding techniques to solve the problem in acceptable time and within memory limitations. In this thesis, we show how existing heuristics fall into the category of combining certain pattern databases. We combine arbitrary pattern collections that can be used as heuristic estimates and apply cost partitioning techniques from classical planning to MSA. We implement two of these heuristics for MSA and compare their estimates to those of the existing heuristics.

Increasing Cost Tree Search is a promising approach to multi-agent pathfinding problems, but like all approaches it has to deal with a huge number of possible joint paths, growing exponentially with the number of agents. We explore the possibility of reducing this number by introducing a value abstraction to the Multi-valued Decision Diagrams used to represent sets of joint paths. To that end, we introduce a heat map to heuristically judge how collision-prone agent positions are, and present how to use and possibly refine abstract positions in order to still find valid paths.

Estimating cheapest plan costs with the help of network flows is an established technique. Plans and network flows are very similar; however, network flows can differ from plans in the presence of cycles. If a transition system contains cycles, flows might be composed of multiple disconnected parts. This discrepancy can worsen the cheapest-plan estimate. One idea for getting rid of the cycles is to introduce time steps: for every time step, the states of the transition system are copied, and transitions are changed so that they connect states only to states of the next time step, which ensures that there are no cycles. It turns out that, by applying this idea to multiple transition systems, the network flows of the individual transition systems can be synchronized via the time steps, yielding a new kind of heuristic that is also discussed in this thesis.

Probabilistic planning is a research field that became popular in the early 1990s. It aims at finding an optimal policy which maximizes the outcome of applying actions to states in an environment featuring unpredictable events. Such environments can consist of a large number of states and actions, which makes finding an optimal policy intractable using classical methods. Using a heuristic function for guided search allows such problems to be tackled. Designing a domain-independent heuristic function requires complex algorithms which may be expensive in terms of time and memory consumption.

In this thesis, we apply supervised learning techniques to learn two domain-independent heuristic functions. We use three types of gradient descent methods: stochastic, batch and mini-batch gradient descent, as well as their improved versions using momentum, learning-rate decay and early stopping. Furthermore, we apply the concept of feature combination in order to better learn the heuristic functions. The learned functions are provided to Prost, a domain-independent probabilistic planner, and benchmarked against the winning algorithms of the International Probabilistic Planning Competition held in 2014. The experiments show that learning an offline heuristic improves the overall score of the search for some of the domains used in the aforementioned competition.
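A minimal sketch of the mini-batch variant with momentum, on invented data: fit a linear heuristic h(s) = w · f(s) to sampled goal distances (the feature vectors, targets and hyperparameters below are made up for illustration).

```python
# Mini-batch gradient descent with momentum on a linear heuristic model
# (invented data; not the thesis's feature sets or Prost integration).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 4))                 # feature vectors f(s)
true_w = np.array([3.0, -1.0, 2.0, 0.5])
y = X @ true_w                           # "observed" goal distances

w = np.zeros(4)
velocity = np.zeros(4)
lr, momentum = 0.1, 0.9
for epoch in range(200):
    for batch in np.split(rng.permutation(256), 8):   # mini-batches of 32
        grad = 2 / len(batch) * X[batch].T @ (X[batch] @ w - y[batch])
        velocity = momentum * velocity - lr * grad
        w += velocity
print(np.round(w, 2))
```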

The merge-and-shrink heuristic is a state-of-the-art admissible heuristic that is often used for optimal planning. Recent studies showed that the merge strategy is an important factor for the performance of the merge-and-shrink algorithm. Many different merge strategies and improvements to merge strategies are described in the literature. One of these merge strategies is MIASM by Fan et al., which tries to merge transition systems that produce unnecessary states in their product, so that those states can be pruned. Another is the symmetry-based merge-and-shrink framework by Sievers et al., which tries to merge transition systems that cause factored symmetries in their product. This strategy can be combined with other merge strategies and often improves their performance. However, the existing combination of MIASM with factored symmetries performs worse than MIASM alone. We implement a different combination that uses factored symmetries during the subset search of MIASM. Our experimental evaluation shows that our new combination solves more tasks than both the existing MIASM and the previously implemented combination of MIASM with factored symmetries. We also evaluate different combinations of existing merge strategies and find combinations that perform better than their basic versions and that have not been evaluated before.

Tree Cache is a pathfinding algorithm that selects one vertex as the root and constructs a tree of cheapest paths to all other vertices. A path is found by traversing up the tree from both the start and goal vertices to the root and concatenating the two parts. This is fast, but since all paths constructed this way pass through the root vertex, they can be highly suboptimal.

To improve this algorithm, we consider two simple approaches. The first is to construct multiple trees and store the distance to each root in every vertex; to find a path, the algorithm then selects the root with the lowest total distance. The second is to remove redundant vertices, i.e. vertices between the root and the lowest common ancestor (LCA) of the start and goal vertices. The performance and space requirements of the resulting algorithm are then compared to the conceptually similar hub labels and differential heuristics.
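The basic scheme, including the LCA trimming of the second approach, can be sketched on a toy graph (example data invented here):

```python
# Sketch of Tree Cache: precompute a BFS tree from the root, answer a
# query by joining the two root paths, then trim the redundant segment
# above the lowest common ancestor (LCA).
from collections import deque

def bfs_tree(graph, root):
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def path_to_root(parent, v):
    path = [v]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def query(parent, start, goal):
    up, down = path_to_root(parent, start), path_to_root(parent, goal)
    # Trim everything above the LCA instead of always routing via the root.
    common = set(up) & set(down)
    lca = next(v for v in up if v in common)
    return up[:up.index(lca)] + down[:down.index(lca) + 1][::-1]

graph = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
parent = bfs_tree(graph, root=0)
print(query(parent, 3, 4))   # stays below the LCA 1: [3, 1, 4]
```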

Greedy Best-First Search (GBFS) is a prominent search algorithm for finding solutions to planning tasks. GBFS chooses nodes for further expansion based on a distance-to-goal estimator, the heuristic, which makes it highly dependent on heuristic quality. Heuristics often produce Uninformed Heuristic Regions (UHRs), and GBFS additionally suffers from the possibility of simultaneously expanding nodes in multiple UHRs. In this thesis we change the search behaviour inside UHRs: since the heuristic is unable to guide the search there, we instead expand novel states to escape the UHRs. Novelty measures how "new" a state is with respect to the search so far. The result is a combination of heuristic- and novelty-guided search, which is indeed able to escape UHRs more quickly and solves more problems in reasonable time.
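A width-1 novelty test of this kind can be sketched in a few lines (fact representation is hypothetical): a state counts as novel iff it contains at least one fact not seen in any previously generated state.

```python
# Sketch of a width-1 novelty test over states represented as fact sets.
class NoveltyTable:
    def __init__(self):
        self.seen = set()

    def is_novel(self, state):
        new_facts = set(state) - self.seen
        self.seen |= set(state)
        return bool(new_facts)

table = NoveltyTable()
print(table.is_novel({("at", "A")}))                  # True: all facts new
print(table.is_novel({("at", "A"), ("has", "key")}))  # True: one new fact
print(table.is_novel({("at", "A")}))                  # False: nothing new
```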

In classical AI planning, the state explosion problem is a recurring subject: although problem descriptions are compact, a huge number of states often needs to be considered. One way to tackle this problem is to use static pruning methods, which reduce the number of variables and operators in the problem description before planning.

In this work, we discuss the properties and limitations of three existing static pruning techniques with a focus on satisficing planning. We analyse these pruning techniques and their combinations, and identify synergy effects between them as well as the domains and problem structures in which these effects occur. We implement the three methods in an existing propositional planner and evaluate the performance of different configurations and combinations in a set of experiments on IPC benchmarks. We observe that static pruning techniques can increase the number of solved problems, and that the synergy effects of the combinations also occur on IPC benchmarks, although they do not lead to a major performance increase.

The goal of classical domain-independent planning is to find a sequence of actions that leads from a given initial state to a state satisfying the goal criteria. Most planning systems use heuristic search algorithms to find such a sequence of actions. A critical part of heuristic search is the heuristic function: to find a sequence of actions from an initial state to a goal state efficiently, this function has to guide the search towards the goal, and creating such an efficient heuristic function is difficult. Arfaee et al. show that it is possible to improve a given heuristic function by applying machine learning techniques on a single domain in the context of heuristic search. To achieve this improvement, they propose a bootstrap learning approach that iteratively improves the heuristic function.

In this thesis we introduce a technique to learn heuristic functions for classical domain-independent planning, based on the bootstrap-learning approach introduced by Arfaee et al. In order to evaluate the performance of the learned heuristic functions, we have implemented a learning algorithm for the Fast Downward planning system. The experiments have shown that a learned heuristic function generally decreases the number of explored states compared to blind search. The total time to solve a single problem increases, because the heuristic function has to be learned before it can be applied.
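The bootstrap idea can be sketched at a high level as follows; `solve` and `fit` are hypothetical placeholders standing in for the planner and the learner, not the actual components used in the thesis:

```python
# Sketch of a bootstrap loop in the spirit of Arfaee et al.: solve training
# instances with the current heuristic, then retrain the heuristic on the
# (state, cost-to-goal) samples just observed.

def bootstrap(h, instances, solve, fit, rounds=3):
    """Iteratively improve heuristic h on a set of training instances.

    solve(instance, h) -> list of (state, cost-to-goal) pairs, or None on timeout
    fit(samples)       -> a new heuristic function trained on those samples
    """
    for _ in range(rounds):
        samples = []
        for inst in instances:
            result = solve(inst, h)
            if result is not None:
                samples.extend(result)
        if not samples:
            break  # nothing solved; a full system would ease the instances
        h = fit(samples)
    return h

# Toy stand-ins: each "instance" is solved trivially and its true cost observed.
solve = lambda inst, h: [(inst, inst)]
fit = lambda samples: (lambda s, table=dict(samples): table.get(s, 0))
h_final = bootstrap(lambda s: 0, [1, 2, 3], solve, fit)
print(h_final(2))  # 2
```

The stand-ins only demonstrate the control flow; in the real system, `fit` would train a learned model and `solve` would run a heuristic search with a time limit.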

Essential for estimating the performance of an algorithm in satisficing planning is its ability to solve benchmark problems. Published results cannot be compared directly, as they originate from different implementations and different machines. We implemented some of the most promising algorithms for greedy best-first search published in recent years and evaluated them on the same set of benchmarks. All algorithms are based on randomised search, localised search, or a combination of both. Our evaluation demonstrates the potential of these algorithms.

Heuristic search with admissible heuristics is the leading approach to cost-optimal, domain-independent planning. Pattern database heuristics - a type of abstraction heuristics - are state-of-the-art admissible heuristics. Two recent pattern database heuristics are the iPDB heuristic by Haslum et al. and the PhO heuristic by Pommerening et al.

The iPDB procedure performs a hill climbing search in the space of pattern collections and evaluates selected patterns using the canonical heuristic. We apply different techniques to the iPDB procedure, improving its hill climbing algorithm as well as the quality of the resulting heuristic. The second recent heuristic - the PhO heuristic - obtains strong heuristic values through linear programming. We present different techniques to influence and improve on the PhO heuristic.

We evaluate the modified iPDB and PhO heuristics on the IPC benchmark suite and show that these abstraction heuristics can compete with other state-of-the-art heuristics in cost-optimal, domain-independent planning.

Greedy best-first search (GBFS) is a prominent search algorithm for satisficing planning - finding good enough solutions to a planning task in reasonable time. GBFS always expands the node a heuristic function estimates to be most promising. However, this behaviour makes GBFS heavily dependent on the quality of the heuristic estimator: inaccurate heuristics can lead GBFS into regions far away from a goal, and if the heuristic ranks several nodes equally, GBFS has no information about which node it should follow. Diverse best-first search (DBFS) is an algorithm by Imai and Kishimoto [2011] that adds a local search component to emphasize exploitation; to enable exploration, DBFS selects the next node probabilistically.

In two problem domains, we analyse GBFS' search behaviour and present theoretical results. We evaluate these results empirically and compare DBFS and GBFS on constructed as well as on provided problem instances.
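As context for the comparison above, a minimal GBFS skeleton can be sketched as follows; the graph, heuristic, and tie-breaking are toy assumptions, not from the thesis:

```python
# Minimal greedy best-first search: nodes are expanded purely in order of
# their heuristic value, which is exactly why a misleading heuristic can
# trap the search far from a goal.
import heapq

def gbfs(start, goal, neighbors, h):
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
goal_dist = {"A": 2, "B": 1, "C": 1, "D": 0}
print(gbfs("A", "D", graph.__getitem__, goal_dist.__getitem__))  # ['A', 'B', 'D']
```

Note the tie between B and C (both estimated at distance 1): plain GBFS has no principled way to choose, which is one of the weaknesses DBFS addresses with its probabilistic selection.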

State-of-the-art planning systems use a variety of control knowledge in order to enhance the performance of heuristic search. Unfortunately most forms of control knowledge use a specific formalism which makes them hard to combine. There have been several approaches which describe control knowledge in Linear Temporal Logic (LTL). We build upon this work and propose a general framework for encoding control knowledge in LTL formulas. The framework includes a criterion that any LTL formula used in it must fulfill in order to preserve optimal plans when used for pruning the search space; this way the validity of new LTL formulas describing control knowledge can be checked. The framework is implemented on top of the Fast Downward planning system and is tested with a pruning technique called Unnecessary Action Application, which detects if a previously applied action achieved no useful progress.

Landmarks are known to yield powerful heuristics for informed search. In this thesis, we explain and evaluate a novel algorithm that finds ordered landmarks of delete-free tasks, such as delete relaxations or Pi-m compilations, by intersecting solutions in the relaxation. The proposed algorithm efficiently finds landmarks and natural orderings of such tasks.

Planning as heuristic search is the prevalent technique for solving planning problems across all kinds of domains. Heuristics estimate distances to goal states in order to guide a search through large state spaces. However, this guidance is sometimes only moderate, since many states lie on plateaus of equally prioritized states in the search space topology. Additional techniques that ignore or prefer some actions for solving a problem successfully support the search in such situations. Nevertheless, some action pruning techniques lead to incomplete searches.

We propose an under-approximation refinement framework for adding actions to under-approximations of planning tasks during a search in order to find a plan. For this framework, we develop a refinement strategy. Starting a search on an initial under-approximation of a planning task, the strategy adds actions determined at states close to a goal, whenever the search does not progress towards a goal, until a plan is found. Key elements of this strategy consider helpful actions and relaxed plans for refinements. We have implemented the under-approximation refinement framework in the greedy best-first search algorithm. Our results show considerable speedups for many classical planning problems. Moreover, we are able to plan with fewer actions than standard greedy best-first search.

The main approach to classical planning is heuristic search. Many cost heuristics are based on the delete relaxation. The optimal heuristic of a delete-free planning problem is called h+. This thesis explores two new ways to compute h+. Both approaches use factored planning, which decomposes the original planning problem in order to work on each subproblem separately. The algorithm reuses the subsolutions and combines them into a global solution.

The two algorithms are used to compute a cost heuristic for an A* search. As both approaches compute the optimal heuristic for delete-free planning tasks, they can also be used to find a solution for relaxed planning tasks.

Multi-Agent Path Finding (MAPF) is a common problem in robotics and memory management. Pebbles in Motion is an implementation of a polynomial-time problem solver for MAPF, based on a 1984 work by Daniel Kornhauser. Recently, many research papers on MAPF have been published in the Artificial Intelligence community, but Kornhauser's work seems to be hardly taken into account. We assume this might be because his paper is rather mathematical and hardly describes the algorithms intuitively. This work aims to fill that gap by providing easily understandable implementation steps for programmers and a new detailed description for researchers in Computer Science.

Bachelor's theses

Fast Downward is a classical planner using heuristic search. The planner uses many advanced planning techniques that are not easy to teach, since they usually rely on complex data structures. To introduce planning techniques to the user, we created an interactive application that uses an illustrative example to showcase them: Blocksworld.

Blocksworld is an easily understandable planning problem which allows a simple representation of a state space. It is implemented in the Unreal Engine and provides an interface to the Fast Downward planner. Users can explore a state space themselves or have Fast Downward generate plans for them. The concept of heuristics as well as the state space are explained and made accessible to the user. The user experiences how the planner explores a state space and which techniques the planner uses.

This thesis is about implementing Jussi Rintanen's algorithm for schematic invariants. The algorithm is implemented in the planning tool Fast Downward and follows Rintanen's paper Schematic Invariants by Reduction to Ground Invariants. The thesis describes all definitions necessary to understand the algorithm and draws a comparison between the original task and a reduced task in terms of runtime and number of grounded actions.

Planning is a field of Artificial Intelligence. Planners are used to find a sequence of actions, to get from the initial state to a goal state. Many planning algorithms use heuristics, which allow the planner to focus on more promising paths. Pattern database heuristics allow us to construct such a heuristic, by solving a simplified version of the problem, and saving the associated costs in a pattern database. These pattern databases can be computed and stored by using symbolic data structures.

In this thesis we look at how pattern databases can be implemented using symbolic data structures, namely binary decision diagrams and algebraic decision diagrams. We extend Fast Downward (Helmert [2006]) with this implementation and compare its performance with the already implemented explicit pattern databases.

In the field of automated planning and scheduling, a planning task is essentially a state space which can be defined rigorously using one of several formalisms (e.g. STRIPS, SAS+, or PDDL). A planning algorithm tries to determine a sequence of actions that leads to a goal state for a given planning task. In recent years, attempts have been made to group certain planners together into so-called planner portfolios, to try to leverage their effectiveness on different specific problem classes. One idea that has recently gained interest is to apply machine learning methods to planner portfolios. In our project, we create an online planner which, in contrast to its offline counterparts, makes use of task-specific information when allocating a planner to a task.

In previous work such as Delfi (Katz et al., 2018; Sievers et al., 2019a), supervised learning techniques were used, which made it necessary to train multiple networks to be able to attempt multiple, potentially different, planners for a given task: if the same network were used, the output would always be the same, as the input to the network would remain unchanged. In this project we make use of techniques from reinforcement learning such as DQNs (Mnih et al., 2013). Using RL approaches such as DQNs allows us to extend the network's input with information such as which planners were previously attempted and for how long. As a result, multiple attempts can be made after training only a single network.

Unfortunately, the results show that current reinforcement learning agents are, among other reasons, too sample-inefficient to deliver viable results given the size of the currently available data sets.

Planning tasks are important and difficult problems in computer science. A widely used approach is delete-relaxation heuristics, to which the additive and FF heuristics belong. These two heuristics use a graph in their calculation, which only has to be constructed once per planning task but can then be used repeatedly. To solve such a problem efficiently, it is important that the calculation of the heuristics is fast. In this thesis, the idea for achieving a faster calculation is to merge redundant parts of the graph during construction, reducing the number of edges and thereby speeding up the calculation. The reduction of redundancies is done for each action of a planning task individually, but further ideas to simplify across all actions are also discussed.
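For context, the additive heuristic can be computed as a simple cost fixpoint over a delete-relaxed task; the task encoding below is an illustrative sketch, not the graph-based implementation discussed above:

```python
# Sketch of the additive heuristic h_add on a delete-relaxed task: the cost
# of reaching a fact is the cost of its cheapest achiever, where an action's
# cost is its own cost plus the summed costs of its preconditions.
import math

def h_add(facts, actions, init, goal):
    """actions: list of (preconditions, add_effects, cost) tuples."""
    cost = {f: (0 if f in init else math.inf) for f in facts}
    changed = True
    while changed:
        changed = False
        for pre, add, c in actions:
            pre_cost = sum(cost[p] for p in pre)
            if math.isinf(pre_cost):
                continue  # some precondition is not yet reachable
            for f in add:
                if c + pre_cost < cost[f]:
                    cost[f] = c + pre_cost
                    changed = True
    return sum(cost[g] for g in goal)

facts = {"a", "b", "c"}
actions = [((), ("a",), 1), (("a",), ("b",), 1), (("a", "b"), ("c",), 1)]
print(h_add(facts, actions, init=set(), goal={"c"}))  # 4
```

Every action is scanned on every fixpoint iteration here; the graph structure the thesis optimizes exists precisely to avoid such repeated full scans.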

Monte Carlo search methods are widely known, mostly for their success in game domains, although they are also applied to many non-game domains. Previous work by Schulte and Keller established that best-first searches can adopt the action selection functionality that makes Monte Carlo methods so formidable. In practice, however, trial-based best-first search without exploration was shown to be slightly slower than its explicit open-list counterpart. In this thesis we examine non-trial and trial-based searches and how they can address the exploration-exploitation dilemma. Lastly, we see how trial-based BFS can rectify a slower search by allowing occasional random action selection, comparing it to regular open-list searches in a line of experiments.

Sudoku has become one of the world’s most popular logic puzzles, arousing interest in the general public and the science community. Although the rules of Sudoku may seem simple, they allow for nearly countless puzzle instances, some of which are very hard to solve. SAT-solvers have proven to be a suitable option to solve Sudokus automatically. However, they demand the puzzles to be encoded as logical formulae in Conjunctive Normal Form. In earlier work, such encodings have been successfully demonstrated for original Sudoku Puzzles. In this thesis, we present encodings for rather unconventional Sudoku Variants, developed by the puzzle community to create even more challenging solving experiences. Furthermore, we demonstrate how Pseudo-Boolean Constraints can be utilized to encode Sudoku Variants that follow rules involving sums. To implement an encoding of Pseudo-Boolean Constraints, we use Binary Decision Diagrams and Adder Networks and study how they compare to each other.
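The basic building blocks of such a CNF encoding can be sketched as follows, shown for a 4x4 grid so the clause counts stay small; the variable numbering scheme is an assumption of this sketch:

```python
# Sketch of the standard Sudoku CNF cell constraints: variable x(r,c,d) is
# true iff cell (r,c) holds digit d. Row, column, and box constraints have
# the same at-least-one / at-most-one structure over different cell groups.
from itertools import combinations

N = 4

def var(r, c, d):
    """Map (row, column, digit) to a positive DIMACS literal."""
    return r * N * N + c * N + d + 1

clauses = []
for r in range(N):
    for c in range(N):
        # at-least-one digit per cell
        clauses.append([var(r, c, d) for d in range(N)])
        # at-most-one digit per cell (pairwise encoding)
        for d1, d2 in combinations(range(N), 2):
            clauses.append([-var(r, c, d1), -var(r, c, d2)])

print(len(clauses))  # 16 cells * (1 + C(4,2)) = 16 * 7 = 112
```

Sum-based variant rules cannot be expressed this directly, which is where the Pseudo-Boolean Constraint encodings via Binary Decision Diagrams or Adder Networks studied in the thesis come in.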

In optimal classical planning, informed search algorithms like A* need admissible heuristics to find optimal solutions. Counterexample-guided abstraction refinement (CEGAR) is a method used to iteratively generate abstractions that yield suitable abstraction heuristics. In this thesis, we propose a class of CEGAR algorithms for the generation of domain abstractions, a class of abstractions that ranks between projections and Cartesian abstractions regarding the degree of refinement they allow. As no previous algorithm constructs domain abstractions, we show that our algorithm is competitive with CEGAR algorithms that generate projections or Cartesian abstractions.

This thesis looks at Single-Player Chess as a planning domain using two approaches: encoding the problem for a domain-independent (general-purpose AI) planner, and building a domain-specific solver. Both implementations differ from traditional chess engines because the agent's task is not to find the best move for a given position and colour, but to check whether a given chess problem has a solution; if the agent can find one, the puzzle is valid. We compare the two approaches in experiments and find that the domain-independent implementation is too slow, while the domain-specific implementation solves the given puzzles reliably but has a memory bottleneck rooted in the search method used.

Carcassonne is a tile-based board game with a large state space and a high branching factor, and therefore poses a challenge to artificial intelligence. In the past, Monte Carlo Tree Search (MCTS), a search algorithm for sequential decision-making processes, has been shown to find good solutions in large state spaces. MCTS works by iteratively building a game tree according to a tree policy. The profitability of paths within that tree is evaluated using a default policy, which influences in which directions the game tree is expanded. The functionality of these two policies, as well as other factors, can be implemented in many different ways; in consequence, many variants of MCTS exist. In this thesis, we applied MCTS to the domain of two-player Carcassonne and evaluated different variants with regard to their performance and runtime. We found significant differences in performance for various variable aspects of MCTS and could thereby identify a configuration which performs best on the domain of Carcassonne. This variant consistently outperformed an average human player with a feasible runtime.

In general, it is important to verify software, as it is prone to error. This also holds for solving tasks in classical planning. So far, plans in general, as well as the fact that there is no plan for a given planning task, can be proven and independently verified. However, no such proof exists for the optimality of a solution. We aim to introduce two methods with which optimality can be proven and independently verified. First, we reduce unit-cost tasks to unsolvable tasks, which enables us to make use of the already existing certificates for unsolvability. In a second approach, we propose a proof system for optimality, which enables us to infer that the determined cost of a task is optimal. This permits the direct generation of optimality certificates.

Pattern databases are one of the most powerful heuristics in classical planning. They evaluate the perfect cost of a simplified sub-problem. The post-hoc optimization heuristic is a technique for optimally combining a set of pattern databases. In this thesis, we adapt the post-hoc optimization heuristic to the sliding tile puzzle. The sliding tile puzzle serves as a benchmark to compare the post-hoc optimization heuristic to already established methods for combining pattern databases. We then show that the post-hoc optimization heuristic improves over these established methods.

In this thesis, we generate landmarks for logistics-specific tasks. Landmarks are actions that need to occur at least once in every plan. A landmark graph is a structure of landmarks and their edges, called orderings. If a landmark graph contains cycles, for every cycle at least one of its landmarks needs to be achieved twice. The generated logistics-specific landmarks and their orderings are used to calculate the cyclic landmark heuristic; our task picks up on related work evaluating this heuristic. We compare landmark graphs generated by a domain-independent landmark generator with those of a domain-specific generator for the logistics domain, the latter being our focus, aiming to bridge the gap between domain-specific and domain-independent landmark generation. We also devise a unit to pre-process data for other domain-specific tasks. We show that specificity is better suited than independence.

Linear programming is a mathematical modelling technique in which a linear function is maximized or minimized subject to various constraints. This technique is particularly useful when decisions have to be made for optimization problems. The goal of this thesis was to develop a tool for the game Factory Town with which optimization queries can be processed. The user can choose between various questions and answer them using LP/IP solvers. In addition, the mathematical formulations as well as the differences between the two methods are addressed. Finally, the generated results underlined that LP solutions are at least as good as, or even better than, the solutions of an IP.

Symbolic search is an important approach to classical planning. Symbolic search uses search algorithms that process sets of states at a time. For this we need states to be represented by a compact data structure called a knowledge compilation. Merge-and-shrink representations come from a different field of planning, where they have been used to derive heuristic functions for state-space search. More generally, they represent functions that map variable assignments to a set of values, so we can regard them as a data structure we call Factored Mappings. In this thesis, we investigate Factored Mappings (FMs) as a knowledge compilation language with the aim of using them for symbolic search. We analyse the necessary transformations and queries for FMs by defining the needed operations and a canonical representation of FMs, and show that they run in polynomial time. We then show that it is possible to use Factored Mappings for symbolic search by defining a symbolic search algorithm for finite-domain planning tasks that works with FMs.

Version control systems use a graph data structure to track revisions of files. Those graphs are mutated with various commands by the respective version control system. The goal of this thesis is to formally define a model of a subset of Git commands which mutate the revision graph, and to model those mutations as a planning task in the Planning Domain Definition Language. Multiple ways to model those graphs will be explored and those models will be compared by testing them using a set of planners.

Pattern databases are admissible abstraction heuristics for classical planning. In this thesis we introduce a boosting process, which consists of enlarging the pattern of a pattern database P, calculating a more informed pattern database P', and then min-compressing P' to the size of P, resulting in a compressed and still admissible pattern database P''. We design and implement two boosting algorithms, Hillclimbing and Randomwalk.

We combine pattern database heuristics using five different cost partitioning methods. The experiments compare computing cost partitionings over regular and boosted pattern databases. The experiments, performed on IPC (optimal track) tasks, show promising results: the coverage (number of solved tasks) increases by 9 for canonical cost partitioning with our Randomwalk boosting variant.

One-dimensional potential heuristics assign a numerical value, the potential, to each fact of a classical planning problem. The heuristic value of a state is the sum over the potentials belonging to the facts contained in the state. Fišer et al. (2020) recently proposed strengthening potential heuristics utilizing mutexes and disambiguations. In this thesis, we implement the same enhancements in the planning system Fast Downward. The experimental evaluation shows that the strengthened potential heuristics are a refinement, but too computationally expensive to solve more problems than the non-strengthened potential heuristics.

The potentials are obtained with a linear program. Fišer et al. (2020) introduced an additional constraint on the initial state, and we propose additional constraints on random states. The additional constraints improve the number of solved problems by up to 5%.

This thesis discusses the PINCH heuristic, a specific implementation of the additive heuristic that intends to combine the strengths of existing implementations. The goal of this thesis is to dig deep into the PINCH heuristic: I want to provide the most accessible resource for understanding PINCH, and I want to analyze the performance of PINCH by comparing it to the algorithm on which it is based, Generalized Dijkstra.

Suboptimal search algorithms can offer attractive benefits compared to optimal search, namely increased coverage of larger search problems and quicker search times. Improving such algorithms, e.g. reducing costs further towards optimal solutions and reducing the number of node expansions, is therefore a compelling area for further research. This thesis explores the utility and scalability of recently developed priority functions, XDP, XUP, and PWXDP, and the Improved Optimistic Search algorithm, compared to Weighted A*, in the Fast Downward planner. Analyses focus on cost, total time, coverage, and node expansions, with experimental evidence suggesting preferable performance when strict optimality is not required. Implementing the priority functions in eager best-first search showed marked improvements over A* in coverage, total time, and number of expansions, without significant cost penalties. In line with previous suboptimal search research, the experimental evidence even indicates that these cost penalties do not reach the designated bound, even in larger search spaces.

In the Automated Planning field, algorithms and systems are developed for exploring state spaces and ultimately finding an action sequence leading from a task's initial state to its goal. Such planning systems may sometimes show unexpected behavior, caused by a planning task or a bug in the planner itself. Generally speaking, finding the source of a bug tends to be easier when the cause can be isolated or simplified. In this thesis, we tackle this problem by making PDDL and SAS+ tasks smaller while ensuring they still invoke a certain characteristic when executed with a planner. We implement a system that successively removes elements, such as objects, from a task and checks whether the transformed task still fails on the planner. Elements are removed in a syntactically consistent way; however, no semantic integrity is enforced. Our system's design is centered around the Fast Downward Planning System, as we re-use some of its translator modules and all test runs are performed with Fast Downward. At the core of our system, first-choice hill-climbing is used for optimization. Our “minimizer” takes (1) a failing planner execution command, (2) a description of the failing characteristic and (3) the type of element to be deleted as arguments. We evaluate our system's functionality on the basis of three use-cases. In our most successful test runs, (1) a SAS+ task with initially 1536 operators and 184 variables is reduced to 2 operators and 2 variables, and (2) a PDDL task with initially 46 actions, 62 objects and 29 predicate symbols is reduced to 2 actions, 6 objects and 4 predicates.

Fast Downward is a classical planning system based on heuristic search. Its successor generator is an efficient and intelligent tool for processing state spaces and generating successor states. In this thesis we implement different successor generators in the Fast Downward planning system and compare them against each other. Apart from the given Fast Downward successor generator, we implement four others: a naive successor generator, one based on the marking of delete-relaxed heuristics, one based on the PSVN planning system, and one based on watched literals as used in modern SAT solvers. These successor generators are tested on a variety of planning benchmarks to see how well they compete against each other. We verify that there is a trade-off between precomputation and faster successor generation, and show that each of the implemented successor generators has a use case, so it is advisable to choose a successor generator that fits the style of the planning task.
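As a point of reference, the naive successor generator can be sketched in a few lines; the operator and state representation here are illustrative, not Fast Downward's internal data structures:

```python
# Naive successor generation as a baseline: test every operator's
# precondition against the state. The smarter generators exist precisely
# to avoid this linear scan over all operators.

def naive_successors(state, operators):
    """operators: list of (name, preconditions, effects); state: dict var -> value."""
    result = []
    for name, pre, eff in operators:
        if all(state.get(v) == val for v, val in pre.items()):
            succ = dict(state)   # copy, then apply the effects
            succ.update(eff)
            result.append((name, succ))
    return result

ops = [
    ("move-a-b", {"at": "a"}, {"at": "b"}),
    ("move-b-a", {"at": "b"}, {"at": "a"}),
]
print(naive_successors({"at": "a"}, ops))  # [('move-a-b', {'at': 'b'})]
```

The trade-off mentioned above is visible even here: this version needs no precomputation at all, but pays for it with a full operator scan on every expansion.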

Verifying whether a planning algorithm came to the correct result for a given planning task is easy if a plan is emitted which solves the problem. But if a task is unsolvable most planners just state this fact without any explanation or even proof. In this thesis we present extended versions of the symbolic search algorithms SymPA and symbolic bidirectional uniform-cost search which, if a given planning task is unsolvable, provide certificates which prove unsolvability. We also discuss a concrete implementation of this version of SymPA.

Classical planning is an attractive approach to solving problems because of its generality and its relative ease of use. Domain-specific algorithms are appealing because of their performance, but require a lot of resources to be implemented. In this thesis we evaluate concept languages as a possible input language for expert domain knowledge in a planning system. We also explore mixed integer programming as a way to use this knowledge to improve search efficiency and to help the user find and refine useful domain knowledge.

Classical planning is a branch of artificial intelligence that studies single-agent, static, deterministic, fully observable, discrete search problems. A common challenge in this field is the explosion of states to be considered when searching for the goal. One technique developed to mitigate this is Strong Stubborn Set based pruning, where on each state expansion the considered successors are restricted to Strong Stubborn Sets, which exploit the properties of independent operators to cut down the tree or graph search. We adapt the definitions of the theory of Strong Stubborn Sets from the SAS+ setting to transition systems and validate a central theorem about the correctness of Strong Stubborn Set based pruning for transition systems in the interactive theorem prover Isabelle/HOL.

Planning problems are an important field in artificial intelligence research. The goal is to build an artificially intelligent machine that can handle many different problems and solve them reliably by producing an optimal plan.

Trial-based Heuristic Tree Search (THTS) is a powerful tool for solving multi-armed-bandit-like problems, i.e. Markov Decision Processes with changing rewards. In current THTS, good rewards found during exploration can go unnoticed due to the large number of rewards; likewise, bad rewards found during exploration can degrade good nodes in the search tree. This thesis introduces a method originating from the piecewise stationary multi-armed bandit setting to further optimize THTS.

Abstractions are a simple yet powerful method of creating a heuristic to solve classical planning problems optimally. In this thesis we make use of Cartesian abstractions generated with Counterexample-Guided Abstraction Refinement (CEGAR). This method refines abstractions incrementally by finding flaws and then resolving them until the abstraction is sufficiently evolved. The goal of this thesis is to implement and evaluate algorithms which select solutions of such flaws, in a way which results in the best abstraction (that is, the abstraction which causes the problem to then be solved most efficiently by the planner). We measure the performance of a refinement strategy by running the Fast Downward planner on a problem and measuring how long it takes to generate the abstraction, as well as how many expansions the planner requires to find a goal using the abstraction as a heuristic. We use a suite of various benchmark problems for evaluation, and we perform this experiment for a single abstraction and on abstractions for multiple subtasks. Finally, we attempt to predict which refinement strategy should be used based on parameters of the task, potentially allowing the planner to automatically select the best strategy at runtime.

Heuristic search is a powerful paradigm in classical planning. The information generated by heuristic functions to guide the search towards a goal is a key component of many modern search algorithms. The paper “Using Backwards Generated Goals for Heuristic Planning” by Alcázar et al. proposes a way to make additional use of this information. They take the last actions of a relaxed plan as a basis to generate intermediate goals with a known path to the original goal. A plan is found when the forward search reaches an intermediate goal.

The premise of this thesis is to modify their approach by focusing on a single sequence of intermediate goals. The aim is to improve efficiency while preserving the benefits of backwards goal expansion. We propose different variations of our approach by introducing multiple ways to make decisions concerning the construction of intermediate goals. We evaluate these variations by comparing their performance and illustrate the challenges posed by this approach.

Counterexample-guided abstraction refinement (CEGAR) is a way to incrementally compute abstractions of transition systems. It starts with a coarse abstraction and then iteratively finds an abstract plan, checks where the plan fails in the concrete transition system and refines the abstraction such that the same failure cannot happen in subsequent iterations. As the abstraction grows in size, finding a solution for the abstract system becomes more and more costly. Because the abstraction grows incrementally, however, it is possible to maintain heuristic information about the abstract state space, allowing the use of informed search algorithms like A*. As the quality of the heuristic is crucial to the performance of informed search, the method for maintaining the heuristic has a significant impact on the performance of the abstraction refinement as a whole. In this thesis, we investigate different methods for maintaining the value of the perfect heuristic h* at all times and evaluate their performance.
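The refinement loop described in this abstract can be made concrete on a toy example. The sketch below runs CEGAR over a small explicit transition system; it is purely illustrative (real Cartesian CEGAR operates on factored planning tasks and maintains search information incrementally rather than searching from scratch each round), and all names are our own.

```python
from collections import deque

def cegar(states, transitions, init, goal):
    """Toy CEGAR over an explicit transition system.
    transitions: dict state -> dict action_label -> successor state.
    Returns (plan, partition); plan is a list of labels, or None if the
    task is proven unsolvable."""
    partition = [frozenset(states)]          # coarsest abstraction

    while True:
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)

        # Find an abstract plan: BFS over the blocks of the partition.
        start, target = block_of(init), block_of(goal)
        parent = {start: None}
        queue = deque([start])
        while queue and target not in parent:
            b = queue.popleft()
            for s in partition[b]:
                for label, t in transitions.get(s, {}).items():
                    nb = block_of(t)
                    if nb not in parent:
                        parent[nb] = (b, label)
                        queue.append(nb)
        if target not in parent:
            return None, partition           # abstraction has no plan
        path, b = [], target                 # [(block, label, next_block)]
        while parent[b] is not None:
            pb, label = parent[b]
            path.append((pb, label, b))
            b = pb
        path.reverse()

        # Check the abstract plan in the concrete system, find a flaw.
        s, flaw = init, None
        for b, label, nb in path:
            t = transitions.get(s, {}).get(label)
            if t is None or t not in partition[nb]:
                # s cannot follow the abstract edge: keep the states that can.
                keep = frozenset(x for x in partition[b]
                                 if transitions.get(x, {}).get(label)
                                 in partition[nb])
                flaw = (b, keep)
                break
            s = t
        else:
            if s == goal:
                return [label for _, label, _ in path], partition
            flaw = (block_of(goal), frozenset([goal]))   # goal flaw

        # Refine: split the flawed block so the same flaw cannot recur.
        b, keep = flaw
        rest = partition[b] - keep
        partition[b] = keep
        partition.append(rest)
```

Each refinement strictly grows the partition, so the loop terminates on finite systems: either an abstract plan eventually executes concretely, or no abstract plan exists and unsolvability is proven.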

Pattern Databases are a powerful class of abstraction heuristics which provide admissible path cost estimates by computing exact solution costs for all states of a smaller task. Said task is obtained by abstracting away variables of the original problem. Abstractions with few variables offer weak estimates, while introduction of additional variables is guaranteed to at least double the amount of memory needed for the pattern database. In this thesis, we present a class of algorithms based on counterexample-guided abstraction refinement (CEGAR), which exploit additivity relations of patterns to produce pattern collections from which we can derive heuristics that are both informative and computationally tractable. We show that our algorithms are competitive with already existing pattern generators by comparing their performance on a variety of planning tasks.
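The way additivity relations are exploited can be sketched as follows: within an additive collection of patterns the individual pattern database estimates may be summed without losing admissibility, and the overall heuristic takes the maximum over collections. The snippet below is a generic illustration, assuming the PDBs are given as plain lookup functions; it is not the thesis's implementation.

```python
def canonical_pdb_heuristic(state, pattern_collections, pdbs):
    """Canonical PDB heuristic: within each additive collection the
    individual pattern database estimates are summed; the heuristic
    value is the maximum over all collections."""
    return max(sum(pdbs[p](state) for p in coll)
               for coll in pattern_collections)
```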

We consider the problem of Rubik's Cube to evaluate modern abstraction heuristics. In order to find feasible abstractions of the enormous state space spanned by Rubik's Cube, we apply projection in the form of pattern databases, Cartesian abstraction via counterexample-guided abstraction refinement, as well as merge-and-shrink strategies. While previous publications on Cartesian abstractions have not covered applicability for planning tasks with conditional effects, we introduce factorized effect tasks and show that Cartesian abstraction can be applied to them. In order to evaluate the performance of the chosen heuristics, we run experiments on different problem instances of Rubik's Cube. We compare them by the initial h-value found for all problems and analyze the number of expanded states up to the last f-layer. These criteria provide insights about the informativeness of the considered heuristics. Cartesian abstraction yields perfect heuristic values for problem instances close to the goal, but it is outperformed by pattern databases for more complex instances. Even though merge-and-shrink is the most general abstraction among those considered, it does not show better performance than the others.

Probabilistic planning expands on classical planning by tying probabilities to the effects of actions. Due to the exponential size of the state space, probabilistic planners have to come up with a strong policy in a very limited time. One approach to optimising the policy that can be found in the available time is metareasoning, a technique which allocates more deliberation time to steps where additional planning time improves the policy, and less deliberation time to steps where an improvement with more planning time is unlikely.

This thesis adapts a recently proposed formal metareasoning procedure by Lin et al. for the search algorithm BRTDP to work with the UCT algorithm in the Prost planner, and compares its viability to the current standard and a number of less informed time management methods, in order to find a potential improvement over the current uniform distribution of deliberation time.

A planner tries to produce a policy that leads to a desired goal, given the available range of actions and an initial state. A traditional approach is to use abstraction. In this thesis we implement the algorithm described in the ASAP-UCT paper "Abstraction of State-Action Pairs in UCT" by Ankit Anand, Aditya Grover, Mausam and Parag Singla.

The algorithm combines state and state-action abstraction with a UCT algorithm. We come to the conclusion that the algorithm needs to be improved, because the state-action abstraction often cannot detect similarities that a reasonable action abstraction could find.

The notion of adding a form of exploration to guide a search has proven to be an effective method for combating heuristic plateaus and improving the performance of greedy best-first search. The goal of this thesis is to take the same approach and introduce exploration in a bounded suboptimal search setting. Explicit estimation search (EES), established by Thayer and Ruml, consults potentially inadmissible information to determine the search order. Admissible heuristics are then used to guarantee the cost bound. In this work we replace the distance-to-go estimator used in EES with an approach based on the concept of novelty.
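A common way to define novelty is the width-1 test: a state is novel iff it contains at least one fact never seen in any previously evaluated state. This is a generic sketch under our own simplifying assumptions; the exact estimator used in the thesis may differ.

```python
class NoveltyEvaluator:
    """Width-1 novelty test over states given as collections of facts."""
    def __init__(self):
        self.seen = set()          # all facts observed so far

    def is_novel(self, state):
        new = set(state) - self.seen
        self.seen |= set(state)    # remember the state's facts
        return bool(new)
```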

Classical domain-independent planning is about finding a sequence of actions which leads from an initial state to a goal state. A popular approach for solving planning problems efficiently is to utilize heuristic functions. A possible heuristic function is the perfect heuristic of the delete-relaxed planning problem, denoted h+. Delete relaxation simplifies the planning problem, making it easier to compute a perfect heuristic. However, computing h+ is still an NP-hard problem.
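Because delete relaxation ignores delete effects, the set of achieved facts only ever grows, so relaxed reachability can be computed by a simple fixed point. The toy sketch below (our own notation, not the thesis's code) tests whether the goal is reachable in the relaxed task:

```python
def relaxed_reachable(init, actions, goal):
    """Fixed-point reachability in the delete relaxation.
    actions: list of (preconditions, add_effects) pairs of fact sets."""
    facts = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            # Apply every action whose preconditions hold; facts only grow.
            if pre <= facts and not add <= facts:
                facts |= add
                changed = True
    return goal <= facts
```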

In this thesis we discuss a promising approach to computing h+ in practice. Inspired by the paper by Gnad, Hoffmann and Domshlak on star-shaped planning problems, we implemented the flow-cut algorithm. The basic idea behind flow-cut is to divide a problem that is unsolvable in practice into smaller subproblems that can be solved. We tested the flow-cut algorithm on the domains provided by the International Planning Competition benchmarks, arriving at the following conclusion: a divide-and-conquer approach can successfully solve classical planning problems, but it is not trivial to design such an algorithm to be more efficient than state-of-the-art search algorithms.

This thesis deals with the algorithm presented in the paper "Landmark-based Meta Best-First Search Algorithm: First Parallelization Attempt and Evaluation" by Simon Vernhes, Guillaume Infantes and Vincent Vidal. Their idea was to reconsider the approach to landmarks as a tool in automated planning, but in a markedly different way than previous work had done. Their result is a meta-search algorithm which explores landmark orderings to find a series of subproblems that reliably lead to an effective solution. Any complete planner may be used to solve the subproblems. While the referenced paper also deals with an attempt to effectively parallelize the Landmark-based Meta Best-First Search Algorithm, this thesis is concerned mainly with the sequential implementation and evaluation of the algorithm in the Fast Downward planning system.

Heuristics play an important role in classical planning. Using heuristics during state space search often reduces the time required to find a solution, but constructing heuristics and using them to calculate heuristic values takes time, reducing this benefit. Constructing heuristics and calculating heuristic values as quickly as possible is therefore very important for the effectiveness of a heuristic. In this thesis we introduce methods to bound the construction of merge-and-shrink, to reduce its construction time and increase its accuracy for small problems, and to bound the heuristic calculation of landmark cut, to reduce heuristic value calculation time. To evaluate the performance of these depth-bound heuristics we have implemented them in the Fast Downward planning system together with three iterative-deepening heuristic search algorithms: iterative-deepening A* search, a new breadth-first iterative-deepening version of A* search, and iterative-deepening breadth-first heuristic search.
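For reference, textbook iterative-deepening A* repeats depth-first searches with an increasing f-bound. The sketch below is a generic illustration of the scheme, not the Fast Downward implementation; `successors(s)` is assumed to yield (child, cost) pairs.

```python
import math

def ida_star(start, successors, h, is_goal):
    """Textbook IDA*: returns the state sequence of a plan, or None."""
    bound = h(start)
    while True:
        t = _search(start, 0, bound, successors, h, is_goal, [start])
        if isinstance(t, list):
            return t              # plan found
        if t == math.inf:
            return None           # search space exhausted: no solution
        bound = t                 # retry with the next larger f-bound

def _search(s, g, bound, successors, h, is_goal, path):
    f = g + h(s)
    if f > bound:
        return f                  # report the smallest exceeded f-value
    if is_goal(s):
        return list(path)
    minimum = math.inf
    for child, cost in successors(s):
        if child in path:         # cheap cycle check on the current path
            continue
        path.append(child)
        t = _search(child, g + cost, bound, successors, h, is_goal, path)
        path.pop()
        if isinstance(t, list):
            return t
        minimum = min(minimum, t)
    return minimum
```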

Greedy best-first search has proven to be a very efficient approach to satisficing planning, but can lose some of its effectiveness when the heuristic function misleads it into a local minimum or onto a plateau. This is where exploration with additional open lists comes in, to assist greedy best-first search in solving satisficing planning tasks more effectively. Building on the idea of exploration by clustering similar states together as described by Xie et al. [2014], where states are clustered according to heuristic values, we propose to instead cluster states based on the Hamming distance of the binary representation of states [Hamming, 1950]. The resulting open list maintains k buckets and inserts each given state into the bucket with the smallest average Hamming distance between the already clustered states and the new state. Additionally, our open list is capable of periodically reclustering all states with the k-means algorithm. We were able to achieve promising results concerning the number of expansions necessary to reach a goal state, despite not achieving a higher coverage than fully random exploration due to slow performance. This was caused by the number of calculations required to identify the most fitting cluster when inserting a new state.
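The insertion rule can be sketched as follows. This is a simplified toy version under our own assumptions: there is no periodic k-means reclustering, empty buckets act as seeds for the first k states, and popping cycles round-robin over non-empty buckets.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

class HammingOpenList:
    """k buckets; each new state joins the bucket with the smallest
    average Hamming distance to the states already in it."""
    def __init__(self, k):
        self.buckets = [[] for _ in range(k)]
        self.next = 0                     # round-robin pop pointer

    def insert(self, state):
        best, best_avg = None, None
        for b in self.buckets:
            # An empty bucket counts as distance 0, so it seeds a cluster.
            avg = sum(hamming(state, s) for s in b) / len(b) if b else 0.0
            if best_avg is None or avg < best_avg:
                best, best_avg = b, avg
        best.append(state)

    def pop(self):
        for _ in range(len(self.buckets)):
            b = self.buckets[self.next]
            self.next = (self.next + 1) % len(self.buckets)
            if b:
                return b.pop(0)
        return None                       # all buckets empty
```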

Monte Carlo Tree Search (MCTS) algorithms are an efficient method for solving probabilistic planning tasks modeled as Markov decision processes. MCTS uses two policies: a tree policy for iterating through the known part of the decision tree, and a default policy to simulate actions and their rewards after leaving the tree. MCTS algorithms have been applied with great success to computer Go. To make the two policies fast, many enhancements based on online knowledge have been developed. The goal of All Moves As First (AMAF) enhancements is to improve the quality of the reward estimate in the tree policy. In this thesis, the α-AMAF, Cutoff-AMAF and Rapid Action Value Estimation (RAVE) enhancements, which are very efficient in the field of computer Go, are implemented in the probabilistic planner PROST. To obtain a better default policy, Move Average Sampling is implemented in PROST and benchmarked against its current default policies.
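One common way to blend AMAF and plain Monte Carlo estimates is the RAVE schedule of Gelly and Silver, in which the AMAF weight decays as real samples accumulate. The sketch below is generic; the bias constant b is illustrative, not a value PROST would necessarily use.

```python
def rave_value(q, n, q_amaf, n_amaf, b=0.1):
    """Blended action-value estimate.
    q, n:        mean reward and count from real visits of the action.
    q_amaf, n_amaf: mean reward and count from AMAF updates.
    b:           bias hyperparameter of the RAVE schedule (illustrative)."""
    if n == 0:
        return q_amaf                 # no real samples yet: trust AMAF
    beta = n_amaf / (n + n_amaf + 4 * b * b * n * n_amaf)
    return beta * q_amaf + (1 - beta) * q
```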

In classical planning the objective is to find a sequence of applicable actions that leads from the initial state to a goal state. In many cases the given problem can be of enormous size. To deal with these cases, a prominent method is heuristic search, which uses a heuristic function to evaluate states and can focus on the most promising ones. In addition to applying heuristics, the search algorithm can apply pruning techniques that exclude applicable actions in a state because applying them at a later point in the path would result in a path consisting of the same actions in a different order. The question remains how these actions can be selected without generating too much additional work, so that pruning is still useful for the overall search. In this thesis we implement and evaluate the partition-based path pruning method proposed by Nissim et al. [1], which tries to decompose the set of all actions into partitions. Based on this decomposition, actions can be pruned with very little additional information. With some alterations to the A* search algorithm, the partition-based pruning method is guaranteed to preserve its optimality. The evaluation confirms that in several standard planning domains, the pruning method can reduce the size of the explored state space.

Validating real-time systems is an important and complex task which becomes exponentially harder with increasing sizes of systems. Therefore, finding an automated approach to check real-time systems for possible errors is crucial. The behaviour of such real-time systems can be modelled with timed automata. This thesis adapts and implements the under-approximation refinement algorithm proposed by Heusner et al. for search-based planners to find error states in timed automata via the directed model checking approach. The evaluation compares the algorithm to existing search methods and shows that a basic under-approximation refinement algorithm yields a competitive search method for directed model checking which is both fast and memory-efficient. Additionally, we illustrate that with some minor alterations the proposed under-approximation refinement algorithm can be further improved.

In this thesis we attempt to learn a heuristic. For a heuristic to be learnable, it must have parameters that determine it. Potential heuristics offer such a possibility; their parameters are called potentials. Pattern databases can detect properties of a state space with comparatively little effort and can therefore serve as a basis for learning potentials. This thesis examines two different approaches to learning the potentials from the information contained in pattern databases. In experiments, both approaches are examined in more detail and finally compared with the FF heuristic.

We consider real-time strategy (RTS) games which have temporal and numerical aspects and pose challenges which have to be solved within limited search time. These games are interesting for AI research because they are more complex than board games. Current AI agents cannot consistently defeat average human players, while even the best players make mistakes we think an AI could avoid. In this thesis, we will focus on StarCraft Brood War. We will introduce a formal definition of the model Churchill and Buro proposed for StarCraft. This allows us to focus on Build Order optimization only. We have implemented a base version of the algorithm Churchill and Buro used for their agent. Using the implementation we are able to find solutions for Build Order Problems in StarCraft Brood War.

Symbolic search is one of the most promising techniques applied in classical planning. Implementing symbolic search on finite state spaces requires a suitable data structure for logical formulas. This thesis explores the use of Sentential Decision Diagrams (SDDs) instead of the commonly used Binary Decision Diagrams (BDDs) for this purpose. SDDs are a generalization of BDDs. We empirically test how an implementation of symbolic search with SDDs in the Fast Downward planner behaves with different vtrees. In particular, we compare the performance of balanced vtrees, which often play to the strengths of SDDs, with right-linear vtrees, for which SDDs behave like BDDs.

The question of whether there are valid Sudokus (i.e. Sudokus with only one solution) that have only 16 clues was answered in the negative in December 2011 by McGuire et al., using an exhaustive brute-force method. The difficulty of this task lies in the vast search space of the problem and the resulting need for an efficient proof idea as well as faster algorithms. In this thesis, the proof method of McGuire et al. is confirmed and implemented in C++ for 2²×2² and 3²×3² Sudokus.

Finding a shortest path between two points is a fundamental problem in graph theory. In practice it is often important to keep the resource consumption for computing such a path minimal, which can be achieved with a compressed path database. In this thesis, we present three methods for constructing a path database in a space-efficient way and evaluate their effectiveness on problem instances of varying size and complexity.

In planning, we want to get from an initial state into a goal state. A state can be described by a finite number of Boolean variables. To transition from one state to another, we apply an action, which, at least in probabilistic planning, leads to a probability distribution over a set of possible successor states. From each transition the agent gains a reward depending on the current state and action. In this setting, the number of possible states grows exponentially with the number of variables. We assume that the value of each variable is determined independently in a probabilistic fashion, so the variables influence the number of possible successor states in the same way as they do the state space. Consequently, it is almost impossible to obtain an optimal amount of reward by approaching this problem with a brute-force technique. One way past this problem is to abstract the problem and then solve the simplified version. This is the idea proposed by Boutilier and Dearden [1], who introduced a method to create an abstraction that depends on the reward formula and the dependencies contained in the problem. With this idea as a basis, we create a heuristic for a trial-based heuristic tree search (THTS) algorithm [5] and a standalone planner using the framework PROST (Keller and Eyerich, 2012). These are then tested on all the domains of the International Probabilistic Planning Competition (IPPC).

A planning task is about transforming a given state into a state that satisfies the required goal properties by sequentially applying actions. Efficiency matters when solving planning tasks. To save time and memory, many planners use heuristic search: a heuristic estimates which action should be applied next in order to reach a desired state as quickly as possible.

This thesis implements the P^m compilation for planning tasks proposed by Haslum and tests the h^max heuristic on the compiled problem against the h^m heuristic on the original problem. The implementation extends the Fast Downward planning system. The test results show that the compilation increases the number of solved problems. Solving a compiled problem with the h^max heuristic is generally faster than solving the original problem with the h^m heuristic at the same level of informativeness. This time gain comes at the cost of higher memory usage.

The objective of classical planning is to find a sequence of actions which begins in a given initial state and ends in a state that satisfies a given goal condition. A popular approach to solve classical planning problems is based on heuristic forward search algorithms. In contrast, regression search algorithms apply actions “backwards” in order to find a plan from a goal state to the initial state. Currently, regression search algorithms are somewhat unpopular, as the generation of partial states in a basic regression search often leads to a significant growth of the explored search space. To tackle this problem, state subsumption is a pruning technique that additionally discards newly generated partial states for which a more general partial state has already been explored.

In this thesis, we discuss and evaluate techniques of regression and state subsumption. In order to evaluate their performance, we have implemented a regression search algorithm for the planning system Fast Downward, supporting both a simple subsumption technique as well as a refined subsumption technique using a trie data structure. The experiments have shown that a basic regression search algorithm generally increases the number of explored states compared to uniform-cost forward search. Regression with pruning based on state subsumption with a trie data structure significantly reduces the number of explored states compared to basic regression.
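The simple subsumption test itself reduces to a subset check when partial states are represented as variable assignments (our own representation for illustration; the thesis's trie-based variant accelerates lookup of such candidates):

```python
def subsumes(general, specific):
    """A partial state (dict var -> value) subsumes another iff all of
    its assignments also hold in the other, i.e. it is at least as
    general. A subsumed partial state can be pruned from the search."""
    return all(specific.get(v) == val for v, val in general.items())
```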

This thesis discusses the Traveling Tournament Problem and how it can be solved with heuristic search. The Traveling Tournament Problem is a sports scheduling problem where one tries to find a schedule for a league that meets certain constraints while minimizing the overall distance traveled by the teams in this league. It is hard to solve for leagues with many teams involved, since its complexity grows exponentially in the number of teams. The largest instances solved to date are instances with leagues of up to 10 teams.

Previous related work has shown that it is a reasonable approach to solve the Traveling Tournament Problem with an IDA*-based tree search. In this thesis I implemented such a search and extended it with several enhancements to examine whether they improve the performance of the search. The heuristic that I used in my implementation is the Independent Lower Bound heuristic, which tries to find lower bounds on the traveling costs of each team in the considered league. With my implementation I was able to solve problem instances with up to 8 teams. The results of my evaluation have mostly been consistent with the expected impact of the implemented enhancements on the overall performance.

A major topic in artificial intelligence is classical planning: the process of finding a plan, i.e. a sequence of actions that leads from an initial state to a goal state for a specified problem. In problems with a huge number of states it is very difficult and time-consuming to find a plan. There are different pruning methods that attempt to lower the time needed to find a plan by reducing the number of states to explore. In this work we take a closer look at two of these pruning methods. Both rely on the last action that led to the current state. The first is so-called tunnel pruning, a generalisation of the tunnel macros used to solve Sokoban problems. The idea is to find actions that allow a tunnel and then prune all actions that are not in the tunnel of this action. The second method is partition-based path pruning, in which all actions are distributed into different partitions. These partitions can then be used to prune actions that do not belong to the current partition.

The evaluation of these two pruning methods shows that they can reduce the number of explored states for some problem domains; however, the difference between pruned and normal search gets smaller when heuristic functions are used. It also shows that the two pruning rules affect different problem domains.

The goal of classical planning is to solve given planning problems as efficiently as possible. The solution, or plan, of a planning problem is a sequence of operators that leads from an initial state to a goal state. To find a goal state in a more directed way, some search algorithms use additional information about the state space: a heuristic. Starting from a state, it estimates the distance to the goal. Ideally, every newly visited state would therefore have a smaller heuristic value than the previously visited one. However, there are search scenarios in which the heuristic does not help to get closer to a goal, in particular when the heuristic value of neighbouring states does not change. For greedy best-first search this means that the search proceeds blindly across plateaus, because this algorithm relies exclusively on the heuristic. Algorithms that use a heuristic as guidance belong to the class of heuristic search algorithms.

This thesis is about maintaining orientation in the state space in cases such as plateaus, by subjecting states to an additional prioritisation besides the heuristic. The method presented here exploits dependencies between operators and extends greedy best-first search. How strongly operators depend on each other is captured by a distance measure computed before the actual search. The basic idea is to prefer states whose operators benefited from each other earlier. The heuristic then only acts as a tie-breaker, so that we can first follow a promising path without the heuristic sending the search to another, less promising place.

The results show that, depending on the heuristic, our approach can outperform purely heuristic-driven search in pure search time. With very informative heuristics, however, our approach can instead disturb the search. Moreover, many problems remain unsolved because computing the distances is too time-consuming.

In classical planning, heuristic search is a popular approach to solving problems very efficiently. The objective of planning is to find a sequence of actions that can be applied to a given problem and that leads to a goal state. For this purpose, there are many heuristics. They are often a big help if a problem has a solution, but what happens if a problem does not have one? Which heuristics can help prove unsolvability without exploring the whole state space? How efficient are they? Admissible heuristics can be used for this purpose because they never overestimate the distance to a goal state and are therefore able to safely cut off parts of the search space. This makes it potentially easier to prove unsolvability.

In this project we developed a problem generator to automatically create unsolvable problem instances and used those generated instances to see how different admissible heuristics perform on them. We used the Japanese puzzle game Sokoban as the first problem because it has a high complexity but is still easy for humans to understand and visualize. As a second problem, we used a logistics problem called NoMystery because, unlike Sokoban, it is a resource-constrained problem and therefore a good supplement to our experiments. Furthermore, unsolvability occurs rather 'naturally' in these two domains and does not seem forced.

Sokoban is a computer game where each level consists of a two-dimensional grid of fields. There are walls as obstacles, movable boxes and goal fields. The player controls the warehouse worker ("Sokoban" in Japanese) and pushes the boxes to the goal fields. The problem is very complex, which is why Sokoban has become a domain in planning.

Phase transitions mark a sudden change in solvability when traversing through the problem space. They occur in the region of hard instances and have been found for many domains. In this thesis we investigate phase transitions in the Sokoban puzzle. For our investigation we generate and evaluate random instances. We declare the defining parameters for Sokoban and measure their influence on the solvability. We show that phase transitions in the solvability of Sokoban can be found and their occurrence is measured. We attempt to unify the parameters of Sokoban to get a prediction on the solvability and hardness of specific instances.
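The measurement behind such an investigation can be sketched generically: sample random instances per parameter value and record the fraction that are solvable, so that a phase transition shows up as a sharp drop in the resulting curve. In this sketch, `generate` and `is_solvable` are hypothetical stand-ins for the Sokoban-specific instance generator and solver.

```python
import random

def solvability_curve(generate, is_solvable, params, samples=100):
    """Estimate the fraction of solvable random instances for each
    parameter value. Instances are seeded for reproducibility."""
    return {p: sum(is_solvable(generate(p, random.Random(i)))
                   for i in range(samples)) / samples
            for p in params}
```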

In planning, we address the problem of automatically finding a sequence of actions that leads from a given initial state to a state that satisfies some goal condition. In satisficing planning, our objective is to find plans with preferably low, but not necessarily the lowest possible costs while keeping in mind our limited resources like time or memory. A prominent approach for satisficing planning is based on heuristic search with inadmissible heuristics. However, depending on the applied heuristic, plans found with heuristic search might be of low quality, and hence, improving the quality of such plans is often desirable. In this thesis, we adapt and apply iterative tunneling search with A* (ITSA*) to planning. ITSA* is an algorithm for plan improvement which has been originally proposed by Furcy et al. for search problems. ITSA* intends to search the local space of a given solution path in order to find "short cuts" which allow us to improve our solution. In this thesis, we provide an implementation and systematic evaluation of this algorithm on the standard IPC benchmarks. Our results show that ITSA* also successfully works in the planning area.

In action planning, greedy best-first search (GBFS) is one of the standard techniques if suboptimal plans are accepted. GBFS uses a heuristic function to guide the search towards a goal state. To achieve generality, in domain-independent planning the heuristic function is generated automatically. A well-known problem of GBFS is search plateaus, i.e., regions in the search space where all states have equal heuristic values. In such regions, heuristic search can degenerate to uninformed search. Hence, techniques to escape from such plateaus are desired to improve the efficiency of the search. A recent approach to avoid plateaus is diverse best-first search (DBFS), proposed by Imai and Kishimoto. However, this approach relies on several parameters. This thesis presents an implementation of DBFS in the Fast Downward planner. Furthermore, it presents a systematic evaluation of DBFS for several parameter settings, leading to a better understanding of the impact of the parameter choices on search performance.

Risk is a popular board game where players conquer each other's countries. In this project, I created an AI that plays Risk and is capable of learning. For each decision it makes, it performs a simple one-step lookahead, examining the outcomes of all possible moves and picking the most beneficial one. It judges the desirability of outcomes by a series of parameters, which are modified after each game using the TD(λ) algorithm, allowing the AI to learn.
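The core TD(λ) update for a linear evaluation function can be sketched as follows. This is a generic illustration with illustrative hyperparameter values, not the parameterisation used in the project.

```python
def td_lambda_update(weights, traces, features, reward, next_value, value,
                     alpha=0.1, gamma=1.0, lam=0.8):
    """One TD(λ) step for a linear evaluator V(s) = w · f(s).
    Updates weights and eligibility traces in place.
    alpha: learning rate, gamma: discount, lam: trace decay."""
    delta = reward + gamma * next_value - value    # TD error
    for i, f in enumerate(features):
        traces[i] = gamma * lam * traces[i] + f    # accumulating traces
        weights[i] += alpha * delta * traces[i]    # credit past features
```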

The Canadian Traveler's Problem (CTP) is a path-finding problem where, due to unfavorable weather, some of the roads are impassable. At the beginning, the agent does not know which roads are traversable and which are not. Instead, it can observe the status of roads adjacent to its current location. We consider the stochastic variant of the problem, where the blocking status of a connection is randomly determined with known probabilities. The goal is to find a policy which minimizes the expected travel costs of the agent.

We discuss several properties of the stochastic CTP and present an efficient way to calculate state probabilities. With the aid of these theoretical results, we introduce an uninformed algorithm to find optimal policies.
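Since each connection is blocked independently with a known probability, the probability of any concrete weather (a complete assignment of blocked/free statuses) follows from the product rule. A minimal sketch with our own representation, not the thesis's state-probability computation:

```python
def weather_probability(status, block_prob):
    """Probability of one concrete weather.
    status: dict edge -> True if blocked, False if free.
    block_prob: dict edge -> blocking probability."""
    p = 1.0
    for e, blocked in status.items():
        p *= block_prob[e] if blocked else 1 - block_prob[e]
    return p
```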

Finding optimal solutions for general search problems is a challenging task. A powerful approach for solving such problems is based on heuristic search with pattern database heuristics. In this thesis, we present a domain specific solver for the TopSpin Puzzle problem. This solver is based on the above-mentioned pattern database approach. We investigate several pattern databases, and evaluate them on problem instances of different size.
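The pattern database construction underlying such a solver follows a simple scheme: a backward breadth-first search from the abstract goal states records the exact goal distance of every abstract state (a minimal unit-cost sketch, not the TopSpin-specific implementation):

```python
from collections import deque

def build_pdb(goal_states, predecessors):
    """Build a pattern database by backward breadth-first search from
    the abstract goal states; pdb[s] is the exact goal distance in the
    abstraction and hence an admissible heuristic for the original
    task (unit-cost sketch)."""
    pdb = {s: 0 for s in goal_states}
    frontier = deque(goal_states)
    while frontier:
        s = frontier.popleft()
        for pred in predecessors(s):
            if pred not in pdb:          # first visit = shortest distance
                pdb[pred] = pdb[s] + 1
                frontier.append(pred)
    return pdb
```

During A* search, each concrete state is projected onto the pattern and the stored distance is looked up as the heuristic value.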

Merge-and-shrink abstractions are a popular approach to generate abstraction heuristics for planning. The computation of merge-and-shrink abstractions relies on a merging and a shrinking strategy. A recently investigated shrinking strategy is based on bisimulations, which are guaranteed to produce perfect heuristics. In this thesis, we investigate an efficient algorithm proposed by Dovier et al. for computing coarsest bisimulations. The algorithm, however, cannot be applied to planning directly and needs some adjustments. We show how it can be adapted to work with planning problems. In particular, we show how an edge-labelled state space can be translated into a state-labelled one, and what other changes are necessary for the algorithm to be usable for planning problems, including a custom data structure that fulfils the requirements of the worst-case complexity bound. Furthermore, the implementation is evaluated on planning problems from the International Planning Competitions. We will see that the resulting algorithm often cannot compete with the algorithm currently implemented in Fast Downward. We discuss the reasons why this is the case and propose possible solutions to resolve this issue.
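For intuition, the coarsest bisimulation of a labelled transition system can be computed by naive partition refinement; the efficient algorithm by Dovier et al. improves on exactly this simple scheme (an illustrative sketch, not the thesis implementation):

```python
def coarsest_bisimulation(states, transitions):
    """Naive partition refinement: start with one block containing all
    states, then repeatedly split blocks by the (label, target block)
    signatures of their states until a fixpoint is reached. transitions
    maps each state to a set of (label, successor) pairs."""
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, block in enumerate(partition) for s in block}
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                # signature: which blocks are reachable under which labels
                sig = frozenset((label, block_of[t])
                                for label, t in transitions[s])
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True           # the block was split; iterate again
            new_partition.extend(groups.values())
        partition = new_partition
    return partition
```

States ending up in the same block are bisimilar and can be merged in the abstraction without losing heuristic accuracy.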

In order to understand an algorithm, it is always helpful to have a visualization that shows step by step what the algorithm is doing. Under this premise, this Bachelor project explains and visualizes two AI techniques, Constraint Satisfaction Processing and SAT Backbones, using the game Gnomine as an example.

CSP techniques build up a network of constraints and infer information by propagating through one or several constraints at a time, reducing the domains of the variables in those constraints. SAT backbone computations find the literals of a propositional formula that are true in every model of the formula.

By showing how to apply these algorithms to the problem of solving a Gnomine game, I hope to give better insight into how the chosen algorithms work.
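As a toy illustration of the backbone computation, the backbone of a small CNF formula can be found by enumerating all assignments (exponential, so only suitable for instances as small as a few Gnomine cells; a real implementation would query a SAT solver instead):

```python
from itertools import product

def backbone(num_vars, clauses):
    """Backbone of a CNF formula by brute-force model enumeration.
    Clauses are DIMACS-style lists of nonzero ints: 3 means x3,
    -3 means NOT x3. Returns None if the formula is unsatisfiable."""
    models = [bits for bits in product([False, True], repeat=num_vars)
              if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
                     for clause in clauses)]
    if not models:
        return None                      # unsatisfiable formula
    backbone_lits = set()
    for v in range(num_vars):
        values = {m[v] for m in models}
        if values == {True}:             # x_{v+1} is true in every model
            backbone_lits.add(v + 1)
        elif values == {False}:          # x_{v+1} is false in every model
            backbone_lits.add(-(v + 1))
    return backbone_lits
```

In the Gnomine setting, a backbone literal corresponds to a cell whose mine status is forced by the visible numbers, i.e., a cell that can be flagged or opened safely.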

Planning as heuristic search is a powerful approach to solve domain-independent planning problems. An important class of heuristics is based on abstractions of the original planning task. However, abstraction heuristics usually come with a loss in precision. The contribution of this thesis is the investigation of constrained abstraction heuristics in general, and the application of this concept to pattern database and merge-and-shrink abstractions in particular. The idea is to use a subclass of mutexes, i.e., sets of variable-value pairs of which at most one can be true at any given time, to regain some of the precision lost in the abstraction without increasing its size. By removing states and operators in the abstraction that conflict with such a mutex, the abstraction is refined and hence the corresponding abstraction heuristic can become more informed. We have implemented the refinements of these heuristics in the Fast Downward planner and evaluated the different approaches using standard IPC benchmarks. The results show that the concept of constrained abstraction heuristics can improve planning as heuristic search in terms of time and coverage.

A permutation problem considers the task where an initial order of objects (i.e., an initial mapping of objects to locations) must be reordered into a given goal order by using permutation operators. Permutation operators are 1:1 mappings of the objects from their locations to (possibly other) locations. Examples of permutation problems are the well-known Rubik's Cube and the TopSpin Puzzle. Permutation problems have been a research area for a while, and several methods for solving such problems have been proposed in the last two centuries. Most of these methods focused on finding optimal solutions, causing an exponential runtime in the worst case.

In this work, we consider an algorithm for solving permutation problems that was originally proposed by M. Furst, J. Hopcroft and E. Luks in 1980. This algorithm was introduced on a theoretical level within a proof for "Testing Membership and Determining the Order of a Group", but has not been implemented and evaluated on practical problems so far. In contrast to the other above-mentioned solving algorithms, it only finds suboptimal solutions, but is guaranteed to run in polynomial time. The basic idea is to iteratively reach subgoals and then keep them fixed while moving on to the next goals. We have implemented this algorithm and evaluated it on different models, such as the Pancake Problem and the TopSpin Puzzle.
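The subgoal-fixing idea is easy to see on the Pancake Problem: repeatedly flip the largest unplaced pancake into its final position and never touch that position again, which yields a suboptimal solution in polynomial time (an illustrative sketch in this spirit, not the evaluated implementation):

```python
def pancake_solve(stack):
    """Suboptimal polynomial-time Pancake solver: bring the largest
    unplaced pancake to the top, flip it into its final position, and
    keep that position fixed from then on. Returns the solution as a
    list of flip operators (prefix lengths)."""
    stack = list(stack)
    flips = []
    for size in range(len(stack), 1, -1):
        pos = max(range(size), key=lambda i: stack[i])  # largest unplaced
        if pos == size - 1:
            continue                     # subgoal already satisfied
        if pos != 0:
            flips.append(pos + 1)        # bring it to the top ...
            stack[:pos + 1] = reversed(stack[:pos + 1])
        flips.append(size)               # ... then flip it into place
        stack[:size] = reversed(stack[:size])
    return flips
```

Each iteration fixes one position permanently, so at most 2(n-1) flips are used, while an optimal solution may be much shorter.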

Pattern databases (PDBs; Culberson & Schaeffer, 1998) have proven very effective for creating admissible heuristics for single-agent search algorithms such as A*. Haslum et al. proposed that a hill-climbing algorithm can be used to construct the PDBs, using the canonical heuristic. A different approach is to change action costs in the pattern-related abstractions in order to obtain an admissible heuristic; this is the so-called cost partitioning.

The aim of this project was to implement cost partitioning inside Haslum's hill-climbing algorithm and to compare the results with the standard approach that uses the canonical heuristic.
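The canonical heuristic takes, over all sets of pairwise additive patterns, the maximum of the summed database values (a minimal sketch; the function and parameter names are assumptions for illustration):

```python
def canonical_heuristic(state, pdbs, additive_sets):
    """Canonical PDB heuristic: within each set of pairwise additive
    patterns the database values may be summed without losing
    admissibility; the heuristic is the maximum of these sums.
    pdbs maps a pattern id to a lookup function; additive_sets lists
    the groups of pattern ids whose values may be added."""
    return max(sum(pdbs[pattern](state) for pattern in subset)
               for subset in additive_sets)
```

Cost partitioning replaces this scheme: by distributing each action's cost among the abstractions, the values of *all* patterns can be summed admissibly, at the price of each individual database becoming less informed.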

UCT ("upper confidence bounds applied to trees") is a state-of-the-art algorithm for acting under uncertainty, e.g. in probabilistic environments. In recent years it has been applied very successfully in numerous contexts, including two-player board games like Go and Mancala and stochastic single-agent optimization problems such as path planning under uncertainty and probabilistic action planning.

In this project the UCT algorithm was implemented, adapted and evaluated for the classical arcade game "Ms Pac-Man". The thesis introduces Ms Pac-Man and the UCT algorithm, discusses some critical design decisions for developing a strong UCT-based algorithm for playing Ms Pac-Man, and experimentally evaluates the implementation.
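At the heart of UCT is the UCB1 rule used to descend the tree: pick the child maximizing average reward plus an exploration bonus (an illustrative sketch; the exploration constant is an assumption):

```python
import math

def uct_select(children, exploration=1.4):
    """UCB1 child selection at the core of UCT: maximize
    mean reward + C * sqrt(ln(parent visits) / child visits).
    children is a list of (total_reward, visits) pairs; unvisited
    children are always preferred. Returns the chosen index."""
    parent_visits = sum(visits for _, visits in children)
    def ucb1(child):
        reward, visits = child
        if visits == 0:
            return float('inf')          # expand untried moves first
        exploit = reward / visits
        explore = exploration * math.sqrt(math.log(parent_visits) / visits)
        return exploit + explore
    return max(range(len(children)), key=lambda i: ucb1(children[i]))
```

In a Ms Pac-Man agent, this selection is applied from the root down to a leaf, a random playout estimates the leaf's value, and the result is backed up along the path, with the move at the root chosen from thousands of such iterations per game tick.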


The Intelligence Edge: Opportunities and Challenges from Emerging Technologies for U.S. Intelligence

Photo: Adobe Stock

Brief by Brian Katz

Published April 17, 2020


CSIS Briefs

  • Emerging technologies such as artificial intelligence have the potential to transform and empower the U.S. Intelligence Community (IC) while simultaneously presenting unprecedented challenges from technologically capable adversaries.
  • These technologies can help expand, automate, and sharpen the collection and processing of intelligence, augment analysts’ ability to craft strategic and value-added analysis and insights, and enable the IC to better time, tailor, and target intelligence products for key decisionmakers.
  • U.S. rivals and adversaries are also moving swiftly to develop, field, and integrate these technologies into intelligence operations against the United States. In addition to competing with state rivals, the U.S. IC also must overcome its own bureaucratic, technical, and organizational hurdles to adopting and assimilating new technologies.
  • The CSIS Technology and Intelligence Task Force will work to identify near-term opportunities to integrate advanced technologies into the production of strategic intelligence and craft an action plan to overcome obstacles and implement change.

INTRODUCTION

Maintaining a competitive advantage in strategic intelligence over increasingly sophisticated rivals and adversaries will be a critical component of ensuring and advancing U.S. national security interests in the coming decades. Central to success in the intelligence realm will be the adoption and assimilation of emerging technologies into the way intelligence is collected, analyzed, and delivered to decisionmakers. If intelligence is about providing timely, relevant, and accurate insight into foreign actors to give U.S. leaders an advantage in formulating policy, then many new technologies hold the potential to unlock deeper and wider data-driven insights and deliver them at greater speed, scale, and specificity for consumers. These same technologies, however, will also transform the intelligence capabilities of rivals such as China and Russia and could disrupt the very fundamentals of U.S. intelligence. 1 In competition with such rivals, emerging technologies and their application to intelligence missions will be a primary and critical battlefield.

The CSIS Technology and Intelligence Task Force has embarked on a year-long study to understand how technologies such as artificial intelligence (AI) I and its subset, machine learning (ML), II cloud computing, and advanced sensors, among others, can empower intelligence and the performance of the intelligence community (IC). The task force will explore how emerging technologies can be applied and integrated into the IC’s day-to-day operations and how the IC must adapt to maintain its intelligence edge.

This CSIS Brief provides a strategic framework for the task force’s efforts. It begins with a snapshot of the potential opportunities emerging technologies present across the intelligence cycle, to be explored in greater depth during the project year. It then outlines the risks and challenges such technologies will pose to the IC. The brief concludes by presenting the core intelligence questions that will drive the task force’s inquiry. The focus of this framing brief and the CSIS task force is strategic intelligence, that is, national-level intelligence intended for senior-level policymakers and national security officials.

BREATHTAKING TECHNOLOGICAL CHANGE CREATES OPPORTUNITIES FOR U.S. INTELLIGENCE

Emerging technologies are already reshaping how the IC gathers, stores, and processes information but will likely transform all core aspects of the intelligence cycle in the coming decades—from collection to analysis to dissemination. Driving this change is the convergence of four technological trends: proliferation of networked, multimodal sensors; massive growth in “big data,” both classified and unclassified; improvements in AI algorithms and applications particularly suited to intelligence, such as computer vision and natural language processing; and exponential growth in computing power to process data and power AI systems. III,2

As the U.S. private sector drives these advances, the IC’s ability to combine commercial AI applications with IC-unique data and systems creates unprecedented opportunities for improving how the IC collects, processes, and derives meaning from data and delivers actionable insights to policymakers. 3 However, as we consider the opportunities presented by emerging technologies like AI, it is also important to understand that these technologies are neither silver bullets to intelligence tasks and problems, nor independent from a much broader technology and human capital ecosystem.

Emerging technologies are already reshaping how the IC gathers, stores, and processes information but will likely transform all core aspects of the intelligence cycle in the coming decades—from collection to analysis to dissemination.

COLLECTION: ENABLING THE “INTS”

In a world of proliferating sensors and exponential growth in data and computing, AI can help enable intelligence collection organizations in automating and simplifying the processing of collected data and in identifying and prioritizing collection targets across the various “-INTs”—geospatial (GEOINT), signals (SIGINT), human (HUMINT), and open-source (OSINT). AI applications can then assist analysts in how they receive, visualize, and exploit that data to discern insights for policymakers.

  • Technical Collectors: AI is particularly well-suited for more technical means of collection such as SIGINT and GEOINT, helping process and analyze their massive pools of sensor-derived data. 4 For GEOINT, AI capabilities such as computer vision IV can help automate the processing of reams of imagery data and perform critical, time-intensive tasks, such as image recognition and categorization at speed and scale. 5 For SIGINT, AI can be similarly useful in automating the processing V of electronic signals data (ELINT), while speech-to-text translation/transcription and other natural language processing capabilities help decipher intercepted communications (COMINT). 6
  • Human Operators: In addition to the technical “-INTs,” AI tools can also enable the on-the-ground human operator in the most core HUMINT mission: recruiting and deriving intelligence from foreign agents. 7 AI algorithms could be trained to help “spot and assess” potential sources by combing open-source data. Advanced analytics could then help construct “digital patterns-of-life” of these recruitment targets, assisting in predicting their activities and verifying their access to desired information. 8 These tools could then be used to monitor for security and counterintelligence risks before and after recruitment. 9
  • Commercial Partners: While enabling aspects of classified intelligence collection, emerging technologies will also transform open-source intelligence (OSINT), providing the IC high-quality data streams and freeing up “exquisite” collection platforms for harder intelligence targets. The commercialization of space and proliferation of satellite-based sensors will dramatically improve the coverage and quality of commercial imagery and some signals collection. 10 The availability of big data and OSINT-derived analytics on global security, political, and economic trends can also help alleviate the collection burden on the IC’s small HUMINT cadre and allow them to focus on collecting truly secret information.
While enabling aspects of classified intelligence collection, emerging technologies will also transform open-source intelligence (OSINT), providing the IC high-quality data streams and freeing up “exquisite” collection platforms for harder intelligence targets.

ANALYSIS: CREATING MORE “STRATEGIC BANDWIDTH”

Emerging technologies can also transform and augment how analysts make sense of ever-growing data and team with machines to deliver timely insights to decisionmakers. “The future of analysis,” CIA’s former Chief Learning Officer Joseph Gartin writes, “will be shaped by the powerful and potentially disruptive effects of AI, big data, and machine learning on what has long been an intimately scaled human endeavor.” 11 Disruption can be a positive for analysts and the way analysis is generated. Analysts could harness AI to more efficiently find and filter evidence, sharpen and test their judgements with machine-derived ones, and automate simple and necessary but time-absorbing tasks. The result could be an analytic cadre with more strategic bandwidth and better able to exploit what will remain their “intimately human” advantages in applying context, historic knowledge, and subject matter expertise to identifying implications and opportunities for policymakers. 12

  • Smarter Search, Fusion, and Data Visualization: Analysis starts with the search for relevant reporting and data across the “INTs.” Analysts should be able to leverage AI, including deep learning, VI to help sift through reporting streams to identify and visualize patterns, trends, and threats and integrate them into their analysis. 13 With AI, strategic analysts and data scientists could partner to hone smarter queries and search algorithms for a given intelligence question, casting wider, more creative, and more efficient nets across datasets to piece together critical but often non-explicit information (e.g., “what is adversary X’s strategy for Y?”). 14
  • Testing Analytic Lines: As intelligence professionals build their analytic lines, assembling key evidence derived from the “INTs” and forming initial judgments, data analytics can be leveraged to test those initial findings against big data and machine-derived results. 15 Corroboration can strengthen analytic lines, while conflicting findings can push analysts to revisit their evidence and assumptions. Machine knowledge and judgment of past analytic lines, source quality, and competing hypotheses can add rigor to the process, helping analysts confront bias, avoid groupthink, think critically, and be transparent about their levels of confidence. 16
  • Offloading Analytic Tasks: In addition to providing inputs for analysis, AI tools can also perform certain types of analysis, enabling analysts to offload more tactical or time-intensive tasks onto machines. Even today, all-source analysts are still called upon to craft daily intelligence products monitoring crises and summarizing geopolitical events when AI can cull the same data—often primarily open-source—and generate written summaries. 17 Machines could also supplement, aggregate, or substitute for analysts in areas where the IC has a mixed track record and unclear comparative advantage, such as predictive analysis and long-range forecasting. 18
Analysts should be able to leverage AI, including deep learning, to help sift through reporting streams to identify and visualize patterns, trends, and threats and integrate them into their analysis.

DISSEMINATION: FASTER, SMARTER SHARING AND DELIVERY

Emerging technologies can help transform not only the crafting of intelligence but also how it is delivered to decisionmakers—at the time, place, and level needed to have impact and stay ahead of the decision curve. 19 As the AIM Initiative notes, cloud computing VII and IC digital infrastructure have “paved the road to harness the power of unique data collections and insights to provide decision advantage at machine speed.” 20 Beyond product dissemination, cloud and AI tools can help transform how intelligence is shared and delivered more broadly—between analysts, organizations, and allies—to distribute vital knowledge and inform decisionmaking. 21

  • Customization: As cloud and AI are distributed and used across IC and policymaking organizations, analysts should be able to better time, tailor, and target products to diverse sets of consumers according to their unique intelligence needs. 22 Much like AI can help analysts process and prioritize relevant data, these tools could help consumers prioritize which intelligence products they receive, customizing their daily “readbooks” to serve their current policy and operational needs. Global cloud capabilities could also help analysts deliver customized intelligence to more decisionmakers—military, diplomatic, and intelligence operators—in more places around the world, unlocking new customers for their products. 23
  • Collaboration: In addition to delivering finished intelligence products, cloud and AI can enable analysts to collaborate more efficiently and effectively across geographic locations in generating those products. 24 Analysts could leverage common or accessible data architectures to share data sets, train and test algorithms, and jointly employ AI tools to generate insights, convey knowledge, and coordinate analytic lines across more diverse sets of analysts. 25 Cloud-enabled collaboration could strengthen analytic findings, build shared missions, and provide consistent feedback to collectors, even those operating on the edge. 26

  • Sharing: Cloud and AI could also be leveraged to improve intelligence and information sharing with customers, consumers, and constituents outside the IC. Within the U.S. government, multi-layer fabrics and cloud architectures could enable the IC to more easily and securely share information with policy, military, and law enforcement organizations at differing classification levels. 27 Outside government, cloud and data sanitization tools could assist the IC in sharing sensitive but unclassified information with the private sector on matters of vital importance, such as cyber threats to critical infrastructure and disinformation campaigns on social media platforms. 28 Outside the United States, cloud and AI can also improve how intelligence is shared and jointly developed over time with U.S. allies and partners. 29
Beyond product dissemination, cloud and AI tools can help transform how intelligence is shared and delivered more broadly—between analysts, organizations, and allies—to distribute vital knowledge and inform decisionmaking.

EMERGING TECHNOLOGIES WILL DISRUPT AND CHALLENGE THE FOUNDATIONS OF U.S. INTELLIGENCE

While the benefits of emerging technologies could be immense for American intelligence, their development, of course, will not occur in a geopolitical vacuum. U.S. rivals, namely China, but also Russia, are moving swiftly to develop, field, and integrate similar AI and associated technologies into intelligence operations. The challenge to U.S. intelligence, however, will come not only from U.S. adversaries but from the IC itself, as organizational, bureaucratic, and technical hurdles slow technological adoption. Further challenges will come from the competition of the private sector and the increasing quality of open-source intelligence, which may be just as—or more—timely, relevant, and accurate than what the classified intelligence world generates.

STRATEGIC: INTELLIGENT INTELLIGENCE WARFARE

As the international race for dominance in AI accelerates, battlefield advantage, the AI National Security Commission notes, “will shift to those with superior data, connectivity, compute power, algorithms, and overall system security.” 30 That battlefield will extend beyond the military realm and into the intelligence one as AI and associated technologies permeate intelligence operations. In the evolution to “intelligentized” warfare, as Chinese military strategists describe it, China, Russia, and other rivals will enjoy a structural advantage: unity of civilian-military effort in developing and employing AI technologies. 31 This resource advantage will be exploited to strengthen their defenses against U.S. intelligence operations and enable more targeted and aggressive offensive operations.

  • Faster to the Fight: China is betting that its whole-of-nation strategy for AI development, fusion of military and civilian spheres, and “techno-utilitarian political culture,” as Kai-Fu Lee writes, “will pave the way for faster deployment of game-changing technologies,” providing a distinct advantage in fielding these technologies for intelligence missions at speed and scale. 32 China, Russia, and other authoritarian states’ ability to synthesize civilian and military AI R&D and steer commercial sector innovation to military and intelligence applications enables them to pool national resources and know-how and potentially adapt technology more quickly to changing operational environments. 33 China’s continuing advances in 5G and internet-of-things will enable even faster distribution and use of AI-enabled intelligence tools, for both defense and offense. 34
  • Stronger Defense: AI-enabled intelligence tools will assist China, Russia, and other U.S. rivals seeking to disrupt, deny, and deceive U.S. intelligence collection. A world of “ubiquitous surveillance” due to advances in AI surveillance and biometric tools will create more denied areas for HUMINT operations, persistent risk of exposure, and the need to change or discard decades of well-honed tradecraft. 35 AI-enabled advances in cybersecurity and cryptography and, in the future, quantum computing, could enable adversaries to harden and encrypt their systems to deny penetration of and collection on their networks. 36 Deception techniques to fool algorithms into misclassifying data and use of generative adversarial networks VIII to create “deepfakes” of imagery, communications, and intelligence reports could sow confusion among U.S. analysts, leading to poor analysis and misinformed policy and operational decisions. 37
  • Aggressive Offense: AI tools will likely also be exploited to penetrate, manipulate, and weaken U.S. collection and analytic capabilities. AI-accelerated cyberattacks could target collection and communication platforms and employ intelligent malware to access, exploit, or destroy critical data and intelligence. 38 Once inside, foreign intelligence could exploit “counter-AI” techniques to insert “poisoned” or false data into training sets to fool U.S. IC algorithms and cause AI systems to misperform, such as a deep neural network image classifier falsely recognizing friend as foe. 39 In addition, AI-enabled disinformation campaigns will enable adversaries to propagate false information at unprecedented scale and seeming authenticity, sowing confusion for analysts and policymakers attempting to make sense of and take action on information. 40
AI-enabled intelligence tools will assist China, Russia, and other U.S. rivals seeking to disrupt, deny, and deceive U.S. intelligence collection. . . . AI tools will likely also be exploited to penetrate, manipulate, and weaken U.S. collection and analytic capabilities.

ORGANIZATIONAL: BUREAUCRACY—AND SECURITY—DOES ITS THING

The coming decade will provide no shortage of tech-enabled opportunities to advance U.S. intelligence, but organizational and bureaucratic barriers and the security and technical realities of intelligence and data architecture will likely hinder the IC’s ability to exploit them. Cutting-edge technology might exist for a given intelligence mission, but it could be outdated and surpassed by U.S. rivals by the time it is actually acquired and integrated. Quality data might exist but cannot be turned into insights and action if it cannot be shared or accessed by analysts. And even if data can be shared, analysts might not trust it or related findings derived by machines.

  • Procurement and Adaptation: The IC’s technology procurement timelines tend to be in years, while the cycle of innovation in the commercial sector renders those technologies outdated in months. The IC’s lengthy research, development, testing, and evaluation timeline reflects its unique needs, risks, and security requirements but will hinder its ability to acquire, integrate, and assimilate AI technologies at speed—let alone at scale and cross enterprises. 41 Moreover, procurement and contracting practices will also make it difficult to adapt acquired AI technologies and restructure key tasks, such as retraining ML algorithms, to shifting intelligence needs and operational environments. 42
  • Stovepipes and Silos: AI tools need access to training and validation data sets across all INTs to be useful for all-source analysts, but vital data often remains hidden in silos buried across IC organizations or on inaccessible data architecture that prevents sharing. 43 Even if data can be accessed and shared, most useful AI methods for intelligence applications require large, quality, and consistently tagged data sets, but differing labeling standards and practices across and even within agencies mean analysts still have to do much of the time-intensive processing and collating work. 44 The challenges of data access, architectures, and formatting are only further exacerbated when working with foreign partners. 45
  • Trust, Authentication, and Explainability: Despite the scale of classified collection, the IC will likely still need access to commercially derived data to have sufficient volume to train and power AI applications. But unlike the commercial sector, IC analysts and data scientists cannot easily turn to open or crowdsourcing platforms for labeled data nor will they necessarily trust the accuracy and authenticity of either, particularly as China and Russia launch more aggressive adversarial AI efforts. 46 Analysis depends on clear explanations and reasoning for the logic, evidence, assumptions, and inferences used to reach conclusions. Machine-generated analyses derived from blackbox algorithms will be unusable if analysts are unable to understand the logic and processes behind the conclusions and the conditions under which they are valid. 47

PEOPLE AND MISSIONS: SHIFTING—OR EVAPORATING—INTELLIGENCE MISSIONS

Along with external threats and internal obstacles, the IC will face a more fundamental, even existential, challenge from rapid technological advances: if commercialization of the intelligence playing field means the information and tools once exclusively the domain of government are made widely available, what will be the IC’s purpose and missions? Technological transformation will also force the IC to more clearly and demonstrably justify its cost and value-added—to policymakers, to Congress, and to the American people—in an environment of growing skepticism, misinformation, and public assaults on the IC’s integrity. Challenges to core missions will be felt by individual professionals, within specific IC organizations, and across the intelligence community writ large.

  • Organizations: U.S. intelligence collection and analysis organizations have been designed around specific “INT” and analytic missions, building unique expertise and cultures over the decades. The blending of intelligence missions through the nature of AI and technological advances (e.g., HUMINT operators using their own SIGINT and GEOINT tools, or AI SIGINT processing tools also generating analysis) could render such task organization irrelevant or ill-suited to future missions. Moreover, competition for intelligence missions with equal or superior commercial products could render entire IC organizations irrelevant, redundant, or even obsolete.
  • Personnel: Within intelligence organizations are intelligence professionals; in an AI-augmented workplace, who will be recruited and attracted to join the IC? Alternatively, how will non-tech-savvy career officers be retrained and retooled to succeed? 48 Will case officers and political analysts who spent a decade studying Arabic, the Middle East, and intelligence tradecraft also need to learn how to code? The fundamentals of what an intelligence professional is and does are likely to change dramatically. Current officers will be required to prepare for a tech-driven future while still mastering present-day missions and tasks.
  • The IC Itself: The proliferation and increasing quality of AI-enabled open-source collection and data analytics tools means that quality analysis of global events and specific threats can be generated for U.S. policymakers at a fraction of the IC’s cost. And while “exquisite” intelligence platforms will still be needed to collect on hard targets and true secrets, the persistent risk of hacks, cyberattacks, and leaks means these expensive tools can be more easily stolen, denied, and rendered inoperable by U.S. adversaries, negating their value. 49

THE WAY FORWARD: OUR CORE INTELLIGENCE QUESTION

While the risks and challenges of emerging technologies to U.S. intelligence are formidable, the opportunities to harness them will likely be even greater. In the months ahead, the CSIS Technology and Intelligence Task Force will be focused on identifying those opportunities and the policy, legislative, organizational, and technical changes that must occur to effectively seize them. Our core objective is to generate actionable recommendations to help the U.S. IC remain the global gold standard in crafting and delivering strategic intelligence that provides policymakers advantages over U.S. adversaries. The central “intelligence question” driving the task force’s research will be:

What are the near-term opportunities to integrate advanced technologies into the production of strategic intelligence, and how can the obstacles to doing so be overcome?

The key sub-questions the task force will explore include:

  • Which emerging technologies could be most relevant and impactful across and within each means of collection (e.g., SIGINT, GEOINT, and HUMINT)?
  • How can “analyst-machine” performance be optimized to maximize data intake, streamline processing, prioritize relevant information, and create more bandwidth for analysts to think and write strategically?
  • How can emerging tech such as AI and cloud computing be used to improve collaboration, coordination, and delivery of intelligence products to policy, intel, military, and allied customers?
  • What is the right model for deploying data scientists and technologists into all-source analysis environments, and what skills should or must strategic analysts develop?
  • Where can the IC smartly focus its technology investments, and what is best to leave to the commercial sector? How can the IC be more agile in acquiring and assimilating them?
  • What are the implications of success or failure in incorporating emerging technologies into the U.S. intelligence enterprise for U.S. national security vis-à-vis global competitors?

Our working hypothesis is twofold. First, emerging technologies hold incredible potential to augment, improve, and transform the collection, analysis, and delivery of intelligence but could require fundamental changes to the types of people, processes, and organizations conducting the work. Calls for entire new entities or other “org chart” solutions, however, will not solve the problem, nor ensure technological advances are actually being integrated at the working-level to truly augment performance. Second, while ML and commercial applications will make some IC tasks and personnel unnecessary or obsolete, the unique skills and expertise of IC professionals remain a distinct U.S. advantage, able to generate intelligence unrivalled in insight, context, and foresight.

Thus, the IC and its critical supporting elements—policymakers, Congress, the technology and industrial sectors, and the research community—should focus on developing and integrating technologies that best enable and augment the IC’s value-added: collecting vital and truly secret intelligence and crafting data-driven, context-rich, and forward-looking analysis that is consistently higher on the policymaker value chain than that of its rivals. 50 Failing to do so risks a reactive U.S. national security policy apparatus that is consistently unable to advance the nation’s strategic interests in the face of determined adversaries.

Brian Katz is a fellow in the International Security Program at the Center for Strategic and International Studies (CSIS) and research director of the CSIS Technology and Intelligence Task Force.

This report is made possible by support to the CSIS Technology and Intelligence Task Force from Booz Allen Hamilton, Rebellion Defense, Redhorse, and TRSS.

CSIS Briefs are produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2020 by the Center for Strategic and International Studies. All rights reserved.

Please consult the PDF for references.


'Artificial Intelligence as a Driving Force for the Economy and Society' is a key theme at the World Economic Forum's Annual Meeting.

AI - artificial intelligence - at Davos 2024: What to know


Robin Pomeroy

  • 'Artificial Intelligence as a Driving Force for the Economy and Society' is a key theme at the World Economic Forum's Annual Meeting.
  • Advances in technology have the potential to help us solve global challenges, but innovation and guardrails are essential.
  • Read about some of the key sessions, reports and initiatives at Davos 2024 on AI, innovation and technology.
  • Check back here for regular updates throughout the week and use the navigation bar on the right to catch up on what you've missed.

If you’d never considered artificial intelligence's impact on your life, 2023 was probably the year that changed that.

From jobs to skills, and regulations and governance, AI permeated conversations like never before.

The impact it will have on jobs is on the radar of the International Monetary Fund (IMF), which has just released its Staff Discussion Note Gen-AI: Artificial Intelligence and the Future of Work.

It finds almost 40% of employment globally is exposed to AI, which rises to 60% in advanced economies. Among workers, those who are college-educated and women are more exposed to AI, but also more likely to reap the benefits, while strong productivity gains could boost growth and wages.

Countries around the world have been exploring regulation and governance around AI, including the European Union, where a draft deal on AI rules was agreed in December.

We also held our own AI Governance Summit, in response to rising concerns about the technology’s impact, released a set of recommendations, and explored the impact of AI and large language models on jobs.

As we look ahead to 2024 at Davos, AI as a driving force is one of our four key themes. Below, we’ll keep you up to date on what to watch, read and look out for.

Live updates on key AI sessions

Dive into the key quotes, tweets and YouTube clips from Davos sessions on AI.

What to know from Day 2

  • Generative AI: Steam Engine of the Fourth Industrial Revolution?

Speakers from government and business discussed the implications of generative AI following its rapid emergence in 2023, and how we can manage the risks.

But there was also a strong focus on how much it could boost productivity and its possible applications, with Senator Mike Rounds of South Dakota, US, saying he believes it can transform healthcare.

  • Finnovation

Business leaders discussed how to ensure the benefits of AI outweigh the risks in fin-tech.

  • The Expanding Universe of Generative Models

Gen AI is advancing rapidly, but what is the latest research and development in the field and what future opportunities will the technology offer?

"AI can solve really hard, aspirational problems, that people maybe are not capable of solving" such as health, agriculture and climate change, said Daphne Koller, Founder and CEO at Insitro Inc.

"We're not done with scaling [LLMs], we still need to push up," said Aiden Gomez, Co-founder and CEO of Cohere Inc.

  • A Conversation with Satya Nadella


Microsoft’s CEO on AI and limiting ‘unintended consequences’

The Forum's Founder and Executive Chairman, Klaus Schwab, had his annual fireside with the Microsoft CEO, which touched on balancing the risks and "unintended consequences" against the benefits of generative AI.

“The biggest lesson of history is… not to be so much in awe of some technology that we sort of feel that we cannot control it, we cannot use it for the betterment of our people.”

  • AI: The Great Equaliser?

We need to bridge the gap between AI's potential and its practical application. How can we ensure equal access to the technology?

"AI will not rescue the SDGs," said Amandeep Singh Gill, the UN Secretary-General's Envoy on Technology.

Rwanda's Minister of Information Communication Technology and Innovation, Paula Ingabire, said AI was more of an opportunity than a challenge for the Global South, but digital literacy and the cost of devices need to be addressed.

What to know from Day 3

  • Thinking through Augmentation

Much of the potential of AI hinges on its use in the workplace. This session brought together the chief executives of Deloitte, Sanofi, L'Oréal, and Exponential View to explore the most likely scenarios for jobs and productivity.

Job function groups with the highest exposure (automation and augmentation)

  • 360° on AI Regulations

Microsoft President Brad Smith joined Arati Prabhakar, the Director of the White House Office of Science and Technology Policy, Vera Jourová, Vice-President for Values and Transparency at the European Commission, and Josephine Teo, Singapore's Minister for Communications and Information, to discuss the future of AI governance.

There are diverse approaches to regulating AI, from the US, EU and multi-nationally to date, but Brad Smith said he expects more convergence in the future.

"We won't have a world without divergence, but people actually care about a lot of the same things and actually have similar approaches to addressing them."

Jourová said AI promises "a lot of fantastic benefits for people".

"The regulation is the precondition to cover the risks, but the rest remains to be free for creativity and positive thinking - and in Europe we are well placed."

  • Ethics in the Age of AI

Philosopher Michael Sandel explored the ethical questions AI poses, beyond jobs, fairness, privacy and democracy to whether technology would affect what it means to be human.

If we can digitally de-age the actor Harrison Ford in the latest Indiana Jones movie, is it OK to bring back actors such as Humphrey Bogart from the dead?

Sandel showed the audience a video interview of him and director and actor Michael B. Jordan discussing casting deceased actors.

It boiled down, he said, to a deep human value of authenticity and presence.

He concluded: "Will new technologies lead us, or are they already leading us and our children to confuse virtual communities and human connection for the real thing? Because if they do, then we may lose something precious about what it means to be human."

What to know from Day 4

  • Education Meets AI

AI has the potential to change education and the way we learn. Emilija Stojmenova Duh, Slovenia's Minister of Digital Transformation, joined UAE Minister of Education, Ahmad bin Abdullah Humaid Belhoul Al Falasi, Hadi Partovi, Founder and CEO of Code.org, and Jeffrey Tarr, CEO of Skillsoft, to explore how we can adapt and adjust to take advantage.

Partovi said when people think about job losses due to AI, the risk isn't people losing their job to AI.

"It's losing their job to somebody else who knows how to use AI. That is going to be a much greater displacement. It's not that the worker gets replaced by just a robot or a machine in most cases, especially for desk jobs, it's that some better educated or more modernly educated worker can do that job because they can be twice as productive or three times as productive."

"The imperative is to teach how AI tools work to every citizen, and especially to our young people."


Will copyright law enable or inhibit generative AI?

  • Gen AI: Boon or Bane for Creativity?

Generative AI presents a future where creativity and technology are more closely linked than ever before.

Neal Mohan, Chief Executive Officer of YouTube, joined Daren Tang, Director-General of the World Intellectual Property Organization (WIPO), Almar Latour, CEO and Publisher, Wall Street Journal, Dow Jones & Company, and contemporary artist Krista Kim, to explore whether prompts should be copyrighted and how we distinguish what is made by humans from machines.

"We need to bring all these actors together to talk and share best practice. We will need some sort of interoperability - that's where the world is heading."

  • Technology in a Turbulent World


Davos 2024: Sam Altman on the future of AI

As technology plays an ever bigger role in our daily lives, questions of safety, trust and human interaction become increasingly important.

In a key and highly anticipated Davos session, OpenAI CEO Sam Altman joined Marc Benioff, Chair and CEO of Salesforce, Julie Sweet, Chair and CEO of Accenture, Jeremy Hunt, UK Chancellor of the Exchequer, and Albert Bourla, CEO of Pfizer, to discuss these issues.

  • Hard Power of AI

From diplomacy to defence, AI is markedly changing geopolitics. Shifts in data ownership and infrastructure will transform some stakeholders while elevating others, reshaping sovereignty and influence.

Leo Varadkar, Taoiseach of Ireland, Dmytro Kuleba, Ukraine's Minister of Foreign Affairs, Karoline Edtstadler, Austria's Federal Minister for the EU and Constitution, Nick Clegg, President of Global Affairs at Meta Platforms, and Mustafa Suleyman, Co-Founder and CEO of Inflection AI, explored how the landscape is evolving and what it means for the existing international architecture.

Clegg highlighted the importance of the political, societal and ethical debate happening "in parallel" as the technology is evolving.

Varadkar said AI had huge potential benefits for the future.

"As a technology, I think it is going to be transformative. I think it's going to change our world as much as the internet has - and maybe even the printing press."


Reports to read on AI and technology


Global Cybersecurity Outlook 2024

The latest Global Cybersecurity Outlook warns about the threat to cyber resilience from emerging technologies, such as generative AI.

Global Lighthouse Network: Adopting AI at Speed and Scale

This whitepaper explores the impact of machine learning on manufacturing through the lens of the Global Lighthouse Network’s 153 Lighthouses.

Jobs of Tomorrow: Large Language Models and Jobs – A Business Toolkit

How can businesses respond to the changes brought about by large language models on jobs? This white paper, produced in collaboration with Accenture, offers a toolkit for businesses to help their workforces reskill, adapt and take advantage of the potential of the technology.


AI Governance Alliance: Briefing Paper Series

Views from the Manufacturing Front Line: Workers’ Insights on How to Introduce New Technology

Technology is evolving rapidly, and companies, particularly in the manufacturing sector, must master the art of introducing emerging technologies to the shop floor. This report, a collaboration with the University of Cambridge and constituent members of the Manufacturing Workers of the Future initiative, looks at how technology can be integrated in a long-term, sustainable, human-centric and effective way.

Patient-First Health with Generative AI: Reshaping the Care Experience

How can generative AI help improve healthcare? This whitepaper explores six case studies where companies and institutions are making the promise a reality.

Initiatives and events to know about

AI Governance Alliance

The AI Governance Alliance brings together leaders from across industry, government, academia and civil society to champion responsible global design and release of transparent and inclusive AI systems.

Innovator Communities

The Forum’s Innovator Communities exist to establish relationships with the world’s leading start-ups, some of which will be tomorrow’s big players, and to engage them in the Forum’s work, sharing their insights and, importantly, solutions to global issues we're all facing. The community comprises three sub-networks: Technology Pioneers, Global Innovators, and Unicorns.


Can Artificial Intelligence Make the PC Cool Again?

Microsoft, HP, Dell and others unveiled a new kind of laptop tailored to work with artificial intelligence. Analysts expect Apple to do something similar.



By Karen Weise and Brian X. Chen

Karen Weise reported from Microsoft’s headquarters in Redmond, Wash., and Brian X. Chen from San Francisco.

The race to put artificial intelligence everywhere is taking a detour through the good old laptop computer.

Microsoft on Monday introduced a new kind of computer designed for artificial intelligence. The machines, Microsoft says, will run A.I. systems on chips and other gear inside the computers so they are faster, more personal and more private.

The new computers, called Copilot+ PC, will allow people to use A.I. to make it easier to find documents and files they have worked on, emails they have read, or websites they have browsed. Their A.I. systems will also automate tasks like photo editing and language translation.

The new design will be included in Microsoft’s Surface laptops and high-end products that run on the Windows operating system offered by Acer, Asus, Dell, HP, Lenovo and Samsung, some of the largest PC makers in the world.

The A.I. PC, industry analysts believe, could reverse a longtime decline in the importance of the personal computer. For the last two decades, the demand for the fastest laptops has diminished because so much software was moved into cloud computing centers. A strong internet connection and a web browser were all most people needed.

But A.I. stretches that long-distance relationship to its limits. ChatGPT and other generative A.I. tools are run in data centers stuffed with expensive and sophisticated chips that can process the largest, most advanced systems. Even the most cutting-edge chatbots take time to receive a query, process it and send back a response. It is also extremely expensive to manage.

Microsoft wants to run A.I. systems directly on a personal computer to eliminate that lag time and cut the price. Microsoft has been shrinking the size of A.I. systems, called models, to make them easier to run outside of data centers. It said more than 40 will run directly on the laptops. The smaller models are generally not as powerful or accurate as the most cutting-edge A.I. systems, but they are improving enough to be useful to the average consumer.

“We are entering a new era where computers not only understand us, but can anticipate what we want and our intents,” said Satya Nadella, Microsoft’s chief executive, at an event at its headquarters in Redmond, Wash.

Analysts expect Apple to follow suit next month at its conference for software developers, where the company will announce an overhaul for Siri, its virtual assistant, and an overall strategy for integrating more A.I. capabilities into its laptops and iPhones.

Whether the A.I. PC takes off depends on the companies’ ability to create compelling reasons for buyers to upgrade. The initial sales of these new computers, which cost more than $1,000, will be small, said Linn Huang, an analyst at IDC, which closely tracks the market. But by the end of the decade — assuming A.I. tools turn out to be useful — they will be “ubiquitous,” he predicted. “Everything will be an A.I. PC.”

The computer industry is looking for a jolt. Consumers have been upgrading their own computers less frequently, as the music and photos they once stored on their machines now often live online, on Spotify, Netflix or iCloud. Computer purchases by companies, schools and other institutions have finally stabilized after booming — and then crashing — during the pandemic.

Some high-end smartphones have already been integrating A.I. chips, but the sales have fallen short because the features “are still not sophisticated enough to catalyze a faster upgrade cycle,” Mehdi Hosseini, an analyst at Susquehanna International Group, wrote in a research note. It will be at least another year, he said, before enough meaningful breakthroughs will lead consumers to take note.

At the event, Microsoft showed new laptops with what it likened to having a photographic memory. Users can ask Copilot, Microsoft’s chatbot, to use a feature called Recall to look up a file by typing a question using natural language, such as, “Can you find me a video call I had with Joe recently where he was holding an ‘I Love New York’ coffee mug?” The computer will then immediately be able to retrieve the file containing those details because the A.I. systems are constantly scanning what the user does on the laptop.

“It remembers things that I forget,” said Matt Barlow, Microsoft’s head of marketing for Surface computers, in an interview.

Microsoft said the information used for this Recall function was stored directly on the laptop for privacy, and would not be sent back to the company’s servers or be used in training future A.I. systems. Pavan Davuluri, a Microsoft executive overseeing Windows, said that with the Recall system users would also be able to opt out of sharing certain types of information, such as visits to a specific website, but that some sensitive data, such as financial information and private browsing sessions, would not be monitored by default.

Microsoft also demonstrated live transcripts that translate in real time, which it said would be available on any video that streams across a laptop’s screen.

Microsoft last month released A.I. models small enough to run on a phone that it said performed almost as well as GPT-3.5, the much larger system that initially underpinned OpenAI’s ChatGPT chatbot when it debuted in late 2022.

(The New York Times sued OpenAI and Microsoft in December for copyright infringement of news content related to A.I. systems.)

Chipmakers have also made advances, like adjusting a laptop’s battery life to allow for the enormous number of calculations that A.I. demands. The new computers have dedicated chips built by Qualcomm, the largest chip provider for smartphones.

Though the type of chip inside the new A.I. computers, known as a neural processing unit, specializes in handling complex A.I. tasks, such as generating images and summarizing documents, the benefits may still be unnoticeable to consumers, said Subbarao Kambhampati, a professor and researcher of artificial intelligence at Arizona State University.

Most of the data processing for A.I. still has to be done on a company’s servers instead of directly on the devices, so it’s still important that people have a fast internet connection, he added.

But the neural processing chips also speed up other tasks, such as video editing or the ability to use a virtual background inside a video call, said Brad Linder, the editor of Liliputing, a blog that has covered computers for nearly two decades. So, even if people don’t buy into the hype surrounding artificial intelligence, they may end up getting an A.I. computer for other reasons.

Karen Weise writes about technology and is based in Seattle. Her coverage focuses on Amazon and Microsoft, two of the most powerful companies in America.

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix, a column about the social implications of the tech we use.



CS&E Announces 2024-25 Doctoral Dissertation Fellowship (DDF) Award Winners


Seven Ph.D. students working with CS&E professors have been named Doctoral Dissertation Fellows for the 2024-25 school year. The Doctoral Dissertation Fellowship is a highly competitive fellowship that gives the University’s most accomplished Ph.D. candidates an opportunity to devote full-time effort to an outstanding research project by providing time to finalize and write a dissertation during the fellowship year. The award includes a stipend of $25,000, tuition for up to 14 thesis credits each semester, and subsidized health insurance through the Graduate Assistant Health Plan.

CS&E congratulates the following students on this outstanding accomplishment:

  • Athanasios Bacharis (Advisor: Nikolaos Papanikolopoulos)
  • Karin de Langis (Advisor: Dongyeop Kang)
  • Arshia Zernab Hassan (Advisor: Chad Myers)
  • Xinyue Hu (Advisor: Zhi-Li Zhang)
  • Lucas Kramer (Advisor: Eric Van Wyk)
  • Yijun Lin (Advisor: Yao-Yi Chiang)
  • Mingzhou Yang (Advisor: Shashi Shekhar)

Athanasios Bacharis


Bacharis’ work centers on the robot-vision area, focusing on making autonomous robots act on visual information. His research includes active vision approaches, namely view planning and next-best-view, to tackle the problem of 3D reconstruction via different optimization frameworks. The acquisition of 3D information is crucial for automating tasks, and active vision methods obtain it via optimal inference. Areas of impact include agriculture and healthcare, where 3D models can lead to reduced use of fertilizers via phenotype analysis of crops and effective management of cancer treatments. Bacharis has a strong publication record, with two peer-reviewed conference papers and one journal paper already published. He also has one conference paper under review and two journal papers in the submission process. His publications are featured in prestigious robotics and automation venues, further demonstrating his expertise and the relevance of his research in the field.
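Next-best-view planning of the kind described above is, at its core, a sequential decision problem: repeatedly choose the camera pose expected to reveal the most unseen geometry. The sketch below is purely illustrative (not Bacharis' actual method; the candidate views, pose names, and voxel IDs are all invented), showing the greedy selection step over discrete candidates:

```python
# Hypothetical illustration of greedy next-best-view selection.
# A real system would estimate each view's coverage from a partial
# 3D model; here "covers" is just an invented set of voxel IDs.

def next_best_view(candidates, observed):
    """Return the candidate view that covers the most unobserved voxels."""
    def gain(view):
        # Information gain proxy: number of voxels this view would add.
        return len(set(view["covers"]) - observed)
    return max(candidates, key=gain)

candidates = [
    {"pose": "front", "covers": {1, 2, 3}},
    {"pose": "side", "covers": {3, 4, 5, 6}},
    {"pose": "top", "covers": {1, 6}},
]
observed = {1, 2}  # voxels already reconstructed

best = next_best_view(candidates, observed)
print(best["pose"])  # "side": it adds four new voxels, the most of any view
```

In practice the gain function is an information-theoretic or coverage estimate and the search runs over continuous pose space, but the greedy select-observe-update loop is the same.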

Karin de Langis


De Langis's thesis sits at the intersection of Natural Language Processing (NLP) and cognitive science. Her work uses eye-tracking and other cognitive signals to improve the performance and cognitive interpretability of NLP systems, and to create NLP systems that process language more similarly to humans. Her human-centric approach to NLP is motivated by the possibility of addressing the shortcomings of current statistics-based NLP systems, which often struggle with explainability and interpretability and can encode biases. This work was most recently accepted and presented at the SIGNLL Conference on Computational Natural Language Learning (CoNLL), which has a special focus on theoretically, cognitively and scientifically motivated approaches to computational linguistics.

Arshia Zernab Hassan


Hassan's thesis work delves into developing computational methods for interpreting data from genome-wide CRISPR/Cas9 screens. CRISPR/Cas9 is a new approach for genome editing that enables precise, large-scale editing of genomes and construction of mutants in human cells. These are powerful data for inferring functional relationships among genes essential for cancer growth. Moreover, chemical-genetic CRISPR screens, where populations of mutant cells are grown in the presence of chemical compounds, help us understand the effects the chemicals have on cancer cells and formulate precise drug solutions. Given the novelty of these experimental technologies, computational methods to process and interpret the resulting data and accurately quantify the various genetic interactions are still quite limited, and this is where Hassan's dissertation is focused. Her research extends to developing deep-learning-based methods that leverage CRISPR chemical-genetic and other genomic datasets to predict cancer sensitivity to candidate drugs. Her methods for improving information content in CRISPR screens were published in Molecular Systems Biology, a highly visible journal in the computational biology field.
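To make the screen data concrete: a common first step in analyzing pooled CRISPR screens (a generic preprocessing convention, not Hassan's specific method; the guide names and counts below are invented) is scoring each guide RNA by the log2 fold-change of its normalized read counts between the treated and control populations:

```python
import math

# Hedged sketch: per-guide log2 fold-change between two screen arms.
# Counts are normalized to counts-per-million; a pseudocount avoids
# taking the log of zero.

def log2_fold_change(treated, control, pseudocount=1.0):
    """Per-guide log2 ratio of normalized read counts (treated vs. control)."""
    t_total = sum(treated.values())
    c_total = sum(control.values())
    lfc = {}
    for guide in treated:
        t = treated[guide] / t_total * 1e6 + pseudocount  # counts per million
        c = control[guide] / c_total * 1e6 + pseudocount
        lfc[guide] = math.log2(t / c)
    return lfc

control = {"gRNA_A": 500, "gRNA_B": 500}
treated = {"gRNA_A": 250, "gRNA_B": 750}
scores = log2_fold_change(treated, control)
print(round(scores["gRNA_A"], 2))  # -1.0: cells carrying gRNA_A dropped out
```

Negative scores flag guides (and hence genes) whose loss slows growth under the chemical treatment; interaction scoring then builds on how such scores deviate from expectation.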

Xinyue Hu

Hu's Ph.D. dissertation is concentrated on how to effectively leverage the power of artificial intelligence and machine learning (AI/ML) – especially deep learning – to tackle challenging and important problems in the design and development of reliable, effective and secure (independent) physical infrastructure networks. More specifically, her research focuses on two critical infrastructures: power grids and communication networks, in particular, emerging 5G networks, both of which not only play a critical role in our daily life but are also vital to the nation’s economic well-being and security. Due to the enormous complexity, diversity, and scale of these two infrastructures, traditional approaches based on (simplified) theoretical models and heuristics-based optimization are no longer sufficient in overcoming many technical challenges in the design and operations of these infrastructures: data-driven machine learning approaches have become increasingly essential. The key question now is: how does one leverage the power of AI/ML without abandoning the rich theory and practical expertise that have accumulated over the years? Hu’s research has pioneered a new paradigm – (domain) knowledge-guided machine learning (KGML) – in tackling challenging and important problems in power grid and communications (e.g., 5G) network infrastructures.

Lucas Kramer

Kramer is now the driving force in designing tools and techniques for building extensible programming languages, with the Minnesota Extensible Language Tools (MELT) group. These are languages that start with a host language such as C or Java, but can then be extended with new syntax (notations) and new semantics (e.g. error-checking analyses or optimizations) over that new syntax and the original host language syntax. One extension that Kramer created was to embed the domain-specific language Halide in MELT's extensible specification of C, called ableC. This extension allows programmers to specify how code working on multi-dimensional matrices is transformed and optimized to make efficient use of hardware. Another embeds the logic-programming language Prolog into ableC; yet another provides a form of nondeterministic parallelism useful in some algorithms that search for a solution in a structured, but very large, search space. The goal of his research is to make building language extensions such as these more practical for non-expert developers.  To this end he has made many significant contributions to the MELT group's Silver meta-language, making it easier for extension developers to correctly specify complex language features with minimal boilerplate. Kramer is the lead author of one journal and four conference papers on his work at the University of Minnesota, winning the distinguished paper award for his 2020 paper at the Software Language Engineering conference, "Strategic Tree Rewriting in Attribute Grammars".

Yijun Lin

Lin’s doctoral dissertation focuses on a timely, important topic of spatiotemporal prediction and forecasting using multimodal and multiscale data. Spatiotemporal prediction and forecasting are important scientific problems applicable to diverse phenomena, such as air quality, ambient noise, traffic conditions, and meteorology. Her work also couples the resulting prediction and forecasting with multimodal (e.g., satellite imagery, street-view photos, census records, and human mobility data) and multiscale geographic information (e.g., census records focusing on small tracts vs. neighborhood surveys) to characterize the natural and built environment, facilitating our understanding of the interactions between and within human social systems and the ecosystem. Her work has a wide-reaching impact across multiple domains such as smart cities, urban planning, policymaking, and public health.

Mingzhou Yang

Yang is developing a thesis in the broad area of spatial data mining for problems in transportation. His thesis has both societal and theoretical significance. Societally, climate change is a grand challenge due to the increasing severity and frequency of climate-related disasters such as wildfires, floods, and droughts. Thus, many nations are aiming at carbon neutrality (also called net zero) by mid-century to avert the worst impacts of global warming. Improving energy efficiency and reducing toxic emissions in transportation is important because transportation accounts for the vast majority of U.S. petroleum consumption and over a third of GHG emissions, and air pollution causes over a hundred thousand U.S. deaths annually. To accurately quantify the expected environmental cost of vehicles during real-world driving, Yang's thesis explores ways to incorporate physics into the neural network architecture, complementing other methods of integration such as feature incorporation and regularization. This approach imposes stringent physical constraints on the neural network model, guaranteeing that its outputs are consistently in accordance with established physical laws for vehicles. Extensive experiments, including ablation studies, demonstrated the efficacy of incorporating physics into the model.
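One way to guarantee physically consistent outputs by construction — a hypothetical sketch in the spirit of the architectural approach described above, not Yang's actual model — is to have the network predict only a nonnegative residual on top of a lower bound derived from basic vehicle physics. Here the floor is the instantaneous power needed to accelerate the vehicle mass, and a softplus keeps the learned residual nonnegative; the function name and the 1500 kg default mass are illustrative assumptions.

```python
import numpy as np

def physics_guarded_energy(raw_output, speed, accel, mass=1500.0):
    """Wrap a network's raw output so predicted traction power can never
    fall below the kinetic-power demand implied by vehicle physics, and
    never goes negative."""
    # Lower bound: power (W) to accelerate the vehicle mass, clipped at
    # zero since braking requires no traction power.
    physical_floor = np.maximum(mass * accel * speed, 0.0)
    # Softplus keeps the learned residual nonnegative and smooth.
    residual = np.log1p(np.exp(raw_output))
    return physical_floor + residual
```

Because the constraint is built into the output layer rather than added as a training penalty, it holds for every input, including inputs far from the training distribution.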


MIT Technology Review

Five ways criminals are using AI

Generative AI has made phishing, scamming, and doxxing easier than ever.

By Melissa Heikkilä

Artificial intelligence has brought a big boost in productivity—to the criminal underworld. 

Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. 

Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”

Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably.

That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using. 

Here are five ways criminals are using AI now. 

Phishing

The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes, says Mislav Balunović, an AI security researcher at ETH Zurich. Researchers have found that the rise of ChatGPT has been accompanied by a huge spike in the number of phishing emails.

Spam-generating services, such as GoMail Pro, have ChatGPT integrated into them, which allows criminal users to translate or improve the messages sent to victims, says Ciancaglini. OpenAI’s policies restrict people from using their products for illegal activities, but that is difficult to police in practice, because many innocent-sounding prompts could be used for malicious purposes too, says Ciancaglini. 

OpenAI says it uses a mix of human reviewers and automated systems to identify and enforce against misuse of its models, and issues warnings, temporary suspensions and bans if users violate the company’s policies. 

“We take the safety of our products seriously and are continually improving our safety measures based on how people use our products,” a spokesperson for OpenAI told us. “We are constantly working to make our models safer and more robust against abuse and jailbreaks, while also maintaining the models’ usefulness and task performance,” they added. 

In a report from February, OpenAI said it had closed five accounts associated with state-affiliated malicious actors.

Before, so-called Nigerian prince scams, in which someone promises the victim a large sum of money in exchange for a small up-front payment, were relatively easy to spot because the English in the messages was clumsy and riddled with grammatical errors, Ciancaglini says. Language models allow scammers to generate messages that sound like something a native speaker would have written.

“English speakers used to be relatively safe from non-English-speaking [criminals] because you could spot their messages,” Ciancaglini says. That’s not the case anymore. 

Thanks to better AI translation, different criminal groups around the world can also communicate better with each other. The risk is that they could coordinate large-scale operations that span beyond their nations and target victims in other countries, says Ciancaglini.

Deepfake audio scams

Generative AI has allowed deepfake development to take a big leap forward, with synthetic images, videos, and audio looking and sounding more realistic than ever. This has not gone unnoticed by the criminal underworld.

Earlier this year, an employee in Hong Kong was reportedly scammed out of $25 million after cybercriminals used a deepfake of the company’s chief financial officer to convince the employee to transfer the money to the scammer’s account. “We’ve seen deepfakes finally being marketed in the underground,” says Ciancaglini. His team found people on platforms such as Telegram showing off their “portfolio” of deepfakes and selling their services for as little as $10 per image or $500 per minute of video. One of the most popular people for criminals to deepfake is Elon Musk, says Ciancaglini. 

And while deepfake videos remain complicated to make and easier for humans to spot, that is not the case for audio deepfakes. They are cheap to make and require only a couple of seconds of someone’s voice—taken, for example, from social media—to generate something scarily convincing.

In the US, there have been high-profile cases where people have received distressing calls from loved ones saying they’ve been kidnapped and asking for money to be freed, only for the caller to turn out to be a scammer using a deepfake voice recording. 

“People need to be aware that now these things are possible, and people need to be aware that now the Nigerian king doesn’t speak in broken English anymore,” says Ciancaglini. “People can call you with another voice, and they can put you in a very stressful situation,” he adds. 

There are some ways for people to protect themselves, he says. Ciancaglini recommends agreeing on a regularly changing secret safe word between loved ones that could help confirm the identity of the person on the other end of the line.

“I password-protected my grandma,” he says.  

Bypassing identity checks

Another way criminals are using deepfakes is to bypass “know your customer” verification systems. Banks and cryptocurrency exchanges use these systems to verify that their customers are real people. They require new users to take a photo of themselves holding a physical identification document in front of a camera. But criminals have started selling apps on platforms such as Telegram that allow people to get around the requirement. 

They work by offering a fake or stolen ID and superimposing a deepfake image on top of a real person’s face to trick the verification system on an Android phone’s camera. Ciancaglini has found examples where people are offering these services for the cryptocurrency website Binance for as little as $70.

“They are still fairly basic,” Ciancaglini says. The techniques they use are similar to Instagram filters, where someone else’s face is swapped for your own. 

“What we can expect in the future is that [criminals] will use actual deepfakes … so that you can do more complex authentication,” he says. 

Jailbreak-as-a-service

If you ask most AI systems how to make a bomb, you won’t get a useful response.

That’s because AI companies have put in place various safeguards to prevent their models from spewing harmful or dangerous information. Instead of building their own AI models without these safeguards, which is expensive, time-consuming, and difficult, cybercriminals have begun to embrace a new trend: jailbreak-as-a-service. 

Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails. 

Services such as EscapeGPT and BlackhatGPT offer anonymized access to language-model APIs and jailbreaking prompts that update frequently. To fight back against this growing cottage industry, AI companies such as OpenAI and Google frequently have to plug security holes that could allow their models to be abused. 

Jailbreaking services use different tricks to break through safety mechanisms, such as posing hypothetical questions or asking questions in foreign languages. There is a constant cat-and-mouse game between AI companies trying to prevent their models from misbehaving and malicious actors coming up with ever more creative jailbreaking prompts. 

These services are hitting the sweet spot for criminals, says Ciancaglini. 

“Keeping up with jailbreaks is a tedious activity. You come up with a new one, then you need to test it, then it’s going to work for a couple of weeks, and then OpenAI updates their model,” he adds. “Jailbreaking is a super-interesting service for criminals.”

Doxxing and surveillance

AI language models are a perfect tool for not only phishing but for doxxing (revealing private, identifying information about someone online), says Balunović. This is because AI language models are trained on vast amounts of internet data, including personal data, and can deduce where, for example, someone might be located.

As an example of how this works, you could ask a chatbot to pretend to be a private investigator with experience in profiling. Then you could ask it to analyze text the victim has written, and infer personal information from small clues in that text—for example, their age based on when they went to high school, or where they live based on landmarks they mention on their commute. The more information there is about them on the internet, the more vulnerable they are to being identified. 

Balunović was part of a team of researchers that found late last year that large language models, such as GPT-4, Llama 2, and Claude, are able to infer sensitive information such as people’s ethnicity, location, and occupation purely from mundane conversations with a chatbot. In theory, anyone with access to these models could use them this way. 

Since their paper came out, new services that exploit this feature of language models have emerged. 

While the existence of these services doesn’t indicate criminal activity, it points out the new capabilities malicious actors could get their hands on. And if regular people can build surveillance tools like this, state actors probably have far better systems, Balunović says. 

“The only way for us to prevent these things is to work on defenses,” he says.

Companies should invest in data protection and security, he adds. 


