AUGUSTA — On Oct. 31, the Maine AI Task Force released a 64-page report on artificial intelligence goals for the state.
When the task force was initially convened, its charge was to prepare Maine’s economy and workforce for emerging AI technologies, to protect Maine residents from the harmful effects of artificial intelligence, and to guide the use of AI in state agencies and other entities such as municipalities.
Ultimately, the team also examined AI technologies in education, healthcare, and nonprofit agencies.
The team defined AI as computer systems that mimic human-like intelligence through specific tasks: pattern recognition, predictive modeling, language processing, and content generation. Earlier attempts to build computer-based “thinking” machines relied on logic-based coding, such as “if/then” models, deterministic analysis, or mechanistic processing (in other words, mathematical modeling and inductive and deductive reasoning). Modern AI works by analyzing large amounts of so-called “training data” and making inferences based on patterns. This became possible due to more powerful and compact computers, datasets that include examples of human reasoning, and better training protocols. One such type of AI is generative AI, which uses large language models trained on huge quantities of data; for instance, the machines might “read” thousands of novels to produce a new, AI-generated novel that sounds as if a human had written it.
The report focused on several key areas specific to Maine: the economy, the workforce, education, healthcare, and the public sector. It also points out concerns that may arise, whether as part of the AI plan, inadvertently, or through bad intent: the loss of low- and middle-level jobs (though the report hopes AI training might open new jobs for displaced workers); the need to safeguard consumer and healthcare data while using that data to improve healthcare or markets; the difficulty of producing datasets that are not biased; and malicious uses of AI, such as constructing deepfakes or students submitting material they did not write.
For the economy and workforce, the task force recommended evaluating how AI would affect Maine workers and labor markets generally, and expanding entrepreneurial assistance for AI-enabled start-ups and small businesses, while enhancing cybersecurity, improving access to advanced computing resources, and working with the Legislature on a regulatory framework predictable enough to allow for the safe adoption of AI. Maine’s broadband network and energy infrastructure would also have to improve to prepare for AI’s impacts.
In education, the task force recommended that teachers be trained in AI, allowing them to teach their students and peers about the new systems, and that AI literacy be embedded into the curriculum for graduating seniors and adult education.
In healthcare, the task force believes Maine could become a leader in AI-led care; several companies in the state are already using AI for emerging medical and scientific advances. It recommends that healthcare professionals be trained and that a regulatory framework and laws be established to help Mainers make use of the technology while mitigating potential privacy risks.
In the public sector, the task force said AI should be a policy priority across the state, including quasi-state agencies, with increased public transparency about how the tools are deployed. Municipalities can also benefit from AI, the task force said, with the state helping to develop technology plans, find funding opportunities, and determine how well AI can address critical infrastructure challenges.
At the same time, the task force has concerns about potentially harmful uses of AI. Among these are safeguarding consumer and healthcare data, protecting privacy (especially from deepfake technology), and mitigating bias in datasets. AI is only as good as the material it “reads,” so a primarily Eurocentric diet of Western literature and science would necessarily produce a skewed picture of the world’s cultures. The task force is also concerned about how courts will view AI and about the need to create new policy and law around the technology.
