Machine Written Laws

When Legislators Start from AI Drafts

Thank you to our Sponsor: NeuroKick

Machine written law sounds like science fiction until you imagine a very ordinary committee room on a Tuesday afternoon. Lawmakers are tired. Staff is overloaded. A deadline is looming. Instead of starting from a blank page, the committee chair asks an AI system to generate the next version of a bill on tax reform or content moderation or carbon pricing. The model ingests all previous drafts, harmonizes references to other statutes, runs an economic forecast, and spits out a clean draft that looks incredibly polished. Human legislators mostly just react to what is already there.

That shift from human first drafting to AI first drafting is what this topic is really about. It is not just a new tool. It quietly changes who sets the starting point for political negotiation and how values are encoded into law.

How AI would actually write laws

In practice, an AI drafting system would be trained on an enormous corpus of constitutions, statutes, regulations, case law, budget models, and lobbying documents. It would not just autocomplete text. It would optimize for explicit objectives chosen by whoever controls the system.

Inside that system you could imagine parameters like these.

Internal consistency: minimize contradictions with existing law, avoid ambiguous wording, harmonize definitions across agencies.
Economic modeling: favor provisions that maximize projected growth, tax revenue, or cost savings under specified models.
Stakeholder goals: weight the preferences of particular industries, unions, agencies, or advocacy groups that provide structured input.

When a legislature asks for a new privacy bill, the system could propose a draft that is mathematically consistent with existing data laws, tuned for predicted economic impact, and pre-aligned with the interests of the most powerful stakeholders that fed data into it.

On the surface this sounds like an upgrade. Cleaner cross references. Fewer loopholes. Automatic checks against budget constraints. The danger is that the deepest choices about whose interests matter and which trade-offs are acceptable are made long before any elected official reads a single sentence.
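The objectives described above can be pictured as a single weighted scoring function. The sketch below is purely illustrative, with hypothetical names, weights, and placeholder scoring logic; no real drafting system is being described. Its point is the last comment: whoever sets the weights decides whose interests dominate before any legislator reads the text.

```python
# Hypothetical sketch of how a drafting system might score candidate
# bill drafts against the three objectives above. All names, weights,
# and scoring functions are illustrative assumptions, not a real system.

def consistency_score(draft):
    # Placeholder: share of cross references that resolve cleanly.
    return draft["resolved_refs"] / draft["total_refs"]

def economic_score(draft):
    # Placeholder: projected revenue normalized against a target,
    # under whatever economic model was chosen upstream.
    return draft["projected_revenue"] / draft["revenue_target"]

def stakeholder_score(draft):
    # Placeholder: average satisfaction of the groups that supplied
    # structured input to the system.
    prefs = draft["stakeholder_satisfaction"]
    return sum(prefs.values()) / len(prefs)

def score_draft(draft, weights):
    """Combine the three objective scores into one number; higher 'wins'."""
    return (
        weights["consistency"] * consistency_score(draft)
        + weights["economics"] * economic_score(draft)
        + weights["stakeholders"] * stakeholder_score(draft)
    )

# Whoever sets `weights` decides whose interests dominate the draft,
# long before any elected official reads a single sentence.
draft = {
    "resolved_refs": 98, "total_refs": 100,
    "projected_revenue": 9.5, "revenue_target": 10.0,
    "stakeholder_satisfaction": {"industry": 0.9, "unions": 0.4, "ngos": 0.3},
}
weights = {"consistency": 0.3, "economics": 0.4, "stakeholders": 0.3}
print(round(score_draft(draft, weights), 3))
```

Nothing in that arithmetic is sinister on its face, which is exactly the problem: the value judgments live in the weights and the placeholder metrics, not in any line a committee would ever debate.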

Why governments might embrace AI drafted policy

There are obvious reasons why governments under pressure would love this approach.

Speed: AI can generate multiple versions of a complex bill in minutes rather than months.
Capacity: small legislatures with limited staff can suddenly handle very technical issues such as quantum encryption or fusion energy regulation.
Consistency: laws can be harmonized across agencies and even across countries through shared drafting models.
Cost savings: fewer staff, faster negotiations, more reuse of existing templates and simulations.

From a political perspective, leaders can present this as a move toward modern, data informed governance. They can argue that machine drafted text is less prone to sloppy errors, hidden riders, or late night amendments written by one powerful senator in a rush.

There is also a quieter appeal. Whoever controls or rents the drafting model controls the default answer. Most lawmakers will not have time to reconstruct a bill from scratch. If the AI draft already reflects the preferences of a ruling coalition, a powerful lobby, or an executive branch, it becomes the gravitational center of the debate.

What optimization without human instincts leaves out

When policy is routinely drafted by AI systems that optimize for internal consistency, economic models, or lobbyist goals, the system starts to miss things that only messy human politics tends to catch.

Internal consistency is not the same thing as justice. A perfectly coherent tax code can still systematically hurt poor families or specific regions. An AI that is rewarded for removing contradictions may learn to smooth away exceptions and carve outs that protect vulnerable groups.

• Internal consistency pushes toward uniform rules even when nonuniform reality calls for nuance.
• Economic modeling pushes toward policies that optimize what can be measured such as productivity or gross domestic product rather than dignity, community bonds, or democratic participation.
• Lobbyist goals push the model toward the preferences of actors that can afford detailed data and expert engineers.

Economic models themselves encode value judgments. Discount rates, assumptions about labor mobility, and risk preferences can drastically change which policy appears optimal. If those parameters are set by a finance ministry or external consultants and then embedded into the drafting AI, many distributive conflicts become invisible. The model will quietly treat certain sacrifices as acceptable collateral damage.
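The discount rate point can be made concrete with a toy net present value calculation. The cash flows and rates below are made up for illustration: a policy with quick payoffs and a policy with delayed payoffs swap places as "optimal" purely because of the rate chosen by whoever configured the model.

```python
# Toy illustration (made-up numbers) of how one modeling parameter,
# the discount rate, can flip which policy appears optimal.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Policy A: same upfront cost, benefits arrive early.
policy_a = [-100, 80, 50, 0, 0]
# Policy B: same upfront cost, larger benefits arrive later.
policy_b = [-100, 0, 0, 85, 85]

for rate in (0.03, 0.15):
    a, b = npv(policy_a, rate), npv(policy_b, rate)
    best = "A" if a > b else "B"
    print(f"rate={rate:.0%}: NPV(A)={a:.1f}, NPV(B)={b:.1f} -> policy {best}")
```

At the low rate the patient policy B looks clearly better; at the high rate the quick-payoff policy A wins. A drafting model with the discount rate baked in would simply present one of these as "the" optimal draft.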

A lobbyist optimized system is even more direct. Suppose a consortium of financial firms funds a specialized drafting model tuned on their preferred regulatory language. Legislators under time pressure may come to rely on that model for all banking legislation. The result can be a statutory code that looks neutral and technical but is effectively a wish list written by machines on behalf of the firms that trained them.


The authenticity and trust problem

Laws are not just code for bureaucracies. They are also symbolic statements about what a community stands for. When citizens learn that much of the text of major bills was written by AI, several trust questions arise.

People already feel disconnected from opaque legal language. If that language is known to be synthetic, the distance can grow.

• Citizens may doubt that their representatives really wrestled with trade-offs if the starting draft came from a black box.
• Civil society groups may struggle to find the human authors to engage or challenge because the key phrases came from a system, not a staffer.
• Opponents will have an easy attack line that the law was written by machines for corporations rather than by representatives for citizens.

Authenticity is also about voice. Legislators sometimes include preambles, findings, and symbolic clauses that speak in moral terms rather than technical language. An AI that was tuned for consistency and economic impact might generate text that feels eerily polished yet emotionally flat. 

If models are not transparent about their training data, marginalized communities may rightly suspect that their histories, languages, and concerns were underrepresented. That can fuel the feeling that they are being governed by someone else’s machine.

Policy capture through technical dependence

AI drafting systems introduce a new form of policy capture that goes beyond traditional lobbying. Instead of manually influencing each clause, powerful actors can shape the training corpus, the objective functions, and the access controls of the model itself.

Potential capture channels include these.

Training data: if most of the input comes from corporate white papers, industry funded research, and existing pro business statutes, the model will naturally echo that worldview.
Fine tuning: lobby groups or executive agencies could pay to have the model tuned on their preferred phrasing, effectively baking their framing and assumptions into every suggested draft.
Access and defaults: the legislature may use a version hosted and configured by a particular ministry or vendor, which can quietly adjust which options appear in the menu of suggested provisions.

Because this influence is technical and statistical rather than explicit, it is harder for watchdogs to track. Traditional transparency tools such as lobbying registries or hearing transcripts may not reveal who really shaped the model’s behavior.

There is also a risk of overdependence. If legislators lose drafting skills or deep familiarity with statutory structure, they may be less able to detect subtle shifts in meaning introduced by model updates. A change in the underlying AI could gradually move a policy regime in a new direction without any overt vote.

Safeguards for machine assisted legislation

None of this means AI should never touch law. Used carefully, it can catch cross reference errors, highlight conflicting case law, and simulate economic outcomes that are too complex for manual calculation. The key is to design the institution around the tool, not the other way around.

Some practical safeguards could include the following.

Mandatory disclosure: every bill must state clearly where AI was used, which model version, and under whose control.
Human-authored justifications: alongside any AI-drafted text, sponsors should provide a narrative explanation in their own words that lays out values, trade-offs, and alternative options that were considered.
Diverse models in competition: legislatures could run multiple drafting systems, for example a civil society tuned model, an academic research model, and an executive branch model, then compare their drafts side by side.
Strong audit rights: independent ethics bodies and journalists should be able to audit training data sources, objective functions, and update logs for the models used in public lawmaking.
Citizen participation: public comment tools could let citizens run the same models with their own prompts to explore alternative drafts and highlight biases.

There is also a design question about the direction of optimization. Instead of optimizing pure economic output or stakeholder satisfaction, models could be built to surface tensions rather than resolve them. For example, an AI assistant might annotate a human-drafted bill with warnings such as “this section is likely to disadvantage low income renters under these scenarios” or “this clause conflicts with the following human rights precedent.” That keeps human judgment in the driver’s seat.
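The "surface tensions rather than resolve them" idea implies a different interface: the system attaches warnings to a human-drafted bill instead of producing a finished draft. The sketch below fakes that interface with simple keyword matching; a real assistant would need an actual model and a vetted policy knowledge base, and every pattern and warning string here is an invented placeholder.

```python
# Hypothetical sketch of a tension-surfacing assistant. Instead of
# rewriting a bill, it returns warnings attached to sections that match
# known risk patterns. The patterns and warnings are illustrative
# placeholders; a real system would use a model, not keyword matching.

RISK_PATTERNS = {
    "rent": "May disadvantage low income renters; check distributional impact.",
    "permit": "Streamlined permitting can favor large firms over small actors.",
    "data retention": "Possible conflict with privacy precedent; request legal review.",
}

def annotate(sections):
    """Return (section_id, warning) pairs; never a rewritten draft."""
    flags = []
    for sec_id, text in sections.items():
        lowered = text.lower()
        for pattern, warning in RISK_PATTERNS.items():
            if pattern in lowered:
                flags.append((sec_id, warning))
    return flags

bill = {
    "sec_1": "Tax credits for green infrastructure investment.",
    "sec_2": "Expedited permit review for projects over $50 million.",
    "sec_3": "Landlords may pass adaptation costs through to rent.",
}
for sec, warning in annotate(bill):
    print(f"{sec}: {warning}")
```

The design choice that matters is the return type: annotations force a human to respond to each flagged tension, whereas a generated draft quietly resolves them all at once.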

Imagining a typical machine written bill

Picture a future budget bill on climate adaptation. The AI system has produced a draft that tightly matches prior fiscal rules and maximizes projected growth under a mainstream economic model. It proposes generous tax incentives for private green infrastructure investment, modest direct relief for communities in flood zones, and procedural language that makes it easier for large firms to get permits.

Legislators skim the draft.

• Committees appreciate that cross references are correct and that the budget adds up.
• Treasury officials praise the projected growth and deficit profile.
• Lobbyists for major construction firms are satisfied with the permitting language.

Missing in the first round is any serious attention to climate migrants, indigenous land rights, or small coastal towns that lack big corporate partners. Their interests were underrepresented in the data and in the optimization targets. Unless an attentive lawmaker or activist spots this and insists on rewriting the bill, machine written efficiency quietly reshapes who is protected.

That simple story captures the central tension. AI drafting can make government smoother and more coherent. It also risks sanding away the messy, argumentative, human parts of politics where justice is fought over in plain language.

Machine written laws are not just a technical upgrade to legislative drafting. They are a shift in who gets to set the default version of our shared rules. If AI systems are tuned mainly for internal consistency, economic modeling, or lobbyist goals, they will tend to favor stability, measurable efficiency, and organized interests. Human political instincts such as empathy for unpopular minorities, symbolic gestures of solidarity, or principled stands against majority opinion are much harder to encode as optimization targets.

The question is not whether legislators will use AI. They almost certainly will. The real question is whether they will use these systems as powerful assistants that illuminate trade-offs or as quiet authors that pre-write the script of politics. The answer will determine whether machine written laws feel like a natural extension of democratic judgment or a subtle transfer of power from citizens to code.

Looking to sponsor our Newsletter and Scoble’s X audience?

By sponsoring our newsletter, your company gains exposure to a curated group of AI-focused subscribers, an audience already engaged in the latest developments and opportunities within the industry. This creates a cost-effective and impactful way to grow awareness, build trust, and position your brand as a leader in AI.

Sponsorship packages include:

  • Dedicated ad placements in the Unaligned newsletter

  • Product highlights shared with Scoble’s 500,000+ X followers

  • Curated video features and exclusive content opportunities

  • Flexible formats for creative brand storytelling

📩 Interested? Contact [email protected], @samlevin on X, +1-415-827-3870

Just Three Things

According to Scoble and Cronin, the top three relevant and recent happenings

AI as Scapegoat and Strategy in the 2025 Layoff Wave

In 2025, U.S. companies blamed AI for nearly 55,000 of about 1.17 million announced layoffs, as firms like Amazon, Microsoft, Salesforce, IBM, CrowdStrike, and Workday restructured and shifted investment toward AI tools. Some experts argue AI is being used as a convenient justification for cutting staff after pandemic overhiring, but studies suggest AI could already automate a significant share of white-collar work and save hundreds of billions in wages. Regulators and researchers are watching closely as AI reshapes which jobs disappear and which new roles are created. CNBC

How Extremists Are Using AI Voice Cloning To Amplify Propaganda

Extremist and terrorist groups are increasingly using AI voice cloning and translation tools to turn ideological texts and old speeches into polished multilingual audio and video propaganda, making their messages more emotionally powerful and easier to spread. Neo-Nazi and jihadist networks are recreating voices of figures like Adolf Hitler and James Mason, and converting key manuals and publications into audiobooks and narrated clips that circulate widely on social platforms and encrypted channels. Researchers warn this use of generative AI is accelerating recruitment and radicalization while making it harder for authorities to track and counter these operations. The Guardian

AI and Surveillance Pricing: When Your Data Decides What You Pay

Surveillance pricing is when companies use AI and large amounts of personal data, like your location, browsing history, and demographics, to personalize the prices you see and try to charge you the most they think you will pay. It is opaque, may rely on sensitive traits, and sits in a legal gray area that is drawing attention from regulators, privacy laws, civil rights law, and the Federal Trade Commission. Individual consumers can try to protect themselves with privacy tools and by comparing prices, but the expert interviewed argues the real fix is stronger national privacy rules that limit how much data companies can collect and use in the first place. PBS

Scoble’s Top Five X Posts