Feature

The Pacing Problem Unplugged Part 2

By Srivathsan Karanai Margan

Editor’s Note: This is the second of a two-part series examining how technological innovation has thrown up new hurdles to the regulatory process. The first part, “The Pacing Problem Unplugged Part 1,” appeared in the January/February 2025 issue of Contingencies.

Reining in the Pacing Problem

Considering the sudden increase in the oversight burden on regulators, there are only two options for addressing the pacing problem: control the pace of technological innovation (by slowing or stopping it), or reinvigorate regulatory entities and experiment with different rulemaking approaches. Because the former is nearly impossible in the current technological landscape, the latter is the more practical option.

Technology Foresight

In the context of information-intelligence age technologies, a common rulemaking refrain is to “start early.” An early start refers not only to when the actual rulemaking process begins but also to when regulators become aware of the genesis of the technologies. Technology foresight is a structured process that helps countries identify and assess future technologies, their applications, and their environment in order to shape long-term industry policies. The activity covers a long-term horizon (10 to 30 years) and a broad scope, considering all potential technologies across different stages of maturity. It helps forecast not only megatrends and breakthrough innovations but also incremental technology developments focused on current needs. The core objective is to understand areas of research and emerging technologies and to identify their potential implications and opportunities. This process yields useful insights for strategic planning, policymaking, and preparedness. Carried out properly, technology foresight can help countries identify where to focus their efforts and where not to. It gives regulators early insight with which to channel their efforts, create adequate guardrails, and prepare appropriate frameworks to guide the progress of technological innovation rather than be surprised later.

Regime Complex

A regime complex is a network of overlapping, intersecting, or conflicting regulatory regimes, laws, or standards operating at the national or international level. Regime complexes may include more than one agreement or authority, which may be functional or territorial in nature. They are non-hierarchical and modular: no institution holds authority over the others, and the constituent parts can be designed for specific purposes.

These regimes can be established by different countries, institutions, or coalitions to address various aspects of an issue, leading to overlapping jurisdictions and fragmented authority. The presence of multiple regimes within a complex brings in different perspectives, as each regime may respond to a challenge in its own way.

Regime complexes have long been used in fields such as the environment, trade, and intellectual property rights. Given the global reach, impact, risks, and combinatorial outcomes of information-intelligence age technologies, however, there is an increasing tendency to set up regime complexes to regulate them. The combinatorial outcomes of some technologies cut across regulatory regimes covering human rights, data protection, intellectual property, and cybersecurity, necessitating regime complexes. Regulating AI/ML through a regime complex, for example, is seen as the most realistic path. Given the complexities involved in regulating AI/ML, it requires intense coordination among businesses; industry-specific, national, and international regulators; and the various regime complexes across local, regional, national, and international landscapes.

Anticipatory Governance

Anticipatory governance is a decision-making method in which multiple stakeholders collaborate to predict potential outcomes and develop efficient methods to address issues at their early stages or prevent them altogether. The process involves exploring, envisioning, and developing an appropriate strategy for the technology.

The precautionary principle is the most extreme form of anticipatory governance and serves as a suitable antidote to the Collingridge dilemma. It prioritizes public safety and delays the deployment of a technology until its risks are fully understood. The principle shifts the burden of proof to the innovators and creators of the technology, requiring them to prove that their innovations will not cause harm. It follows a “better-safe-than-sorry” approach, limiting or even prohibiting trial-and-error experimentation and risk-taking. By restricting a technology before it is extensively vetted, the precautionary principle could also alleviate the pacing problem to some extent.

The precautionary principle is often criticized as a deterrent to innovation. However, when the technology being regulated follows no known pathway and advances along an uncharted course, such a restrictive practice serves as a compelling quick fix to prevent harm. For example, the uncontrolled proliferation of generative AI tools is triggering a new wave of disinformation. The technology enables users to effortlessly produce high-quality fake content (photos, videos, audio, or text) and distribute these sophisticated fakes to harass individuals, defame social groups, blackmail organizations, or destabilize political systems. Such abuse makes a strong case for applying the precautionary principle to prevent public harm.

Considering the nature of new-age technologies, the limitations of traditional rulemaking approaches, and the shortcomings of regulating entities, different rulemaking approaches and regulatory structures must be explored to enable faster rulemaking and address the escalating pacing problem. Although they are entangled in kludgeocracy, ossification, demosclerosis, and bureaucratic inertia, regulatory bodies urgently need to foster the knowledge required to facilitate these alternative rulemaking approaches.

Soft Law Mechanisms

Soft law mechanisms are not new; they have existed for a long time. Compared with hard law, however, soft law has not been widely used as the first-choice rulemaking instrument for technologies. Although soft law originally developed in the international law context, it is increasingly becoming an integral component of the governance framework for emerging technologies. Unlike hard law, which involves detailed rulemaking approaches and outcomes that are legally binding on the parties involved and enforceable before a court, soft law encompasses a variety of non-statutory, non-binding norms and techniques that are not legally enforceable. Soft law shifts the responsibility for oversight from regulatory entities to a broad range of stakeholders and actors, including businesses, non-governmental agencies, and various forms of partnerships and collaborations. Soft law takes forms such as guidelines and recommendations, voluntary codes of conduct, declarations and resolutions, standards and certifications, memoranda of understanding, best practices and benchmarking, model laws and frameworks, industry agreements, and ethical charters and principles.

  • Best practices and benchmarking: Shared strategies or performance benchmarks developed collaboratively or by leading organizations to improve sector-wide outcomes. These are adopted across the technology sector to address emerging issues, such as ethical AI development or sustainable technology manufacturing.
  • Collaborative/co-regulation: The regulator collaborates with stakeholders from industry, academia, and consumer advocacy to co-create standards and policies. Industry and regulators share control over developing and approving the standards. Continuous engagement with professional and standard-setting bodies such as the Academy is a case in point. The approach balances industry expertise with regulatory oversight to ensure that regulations are practical, and industry buy-in increases compliance rates for regulators.
  • Declarations and resolutions: Non-binding statements by international bodies or conferences expressing shared commitments or aspirations to address global technology challenges, such as digital equity or data privacy.
  • Ethical charters and principles: Statements emphasizing shared ethical values and responsibilities to address risks, such as AI bias or user manipulation.
  • Guidelines and recommendations: Non-binding advisories issued by governments, international organizations, or regulatory bodies to provide direction on specific issues or practices. In technology, these offer flexible advice on ethical or technical issues such as AI ethics, data governance, or cybersecurity without imposing strict rules.
  • Industry agreements: Voluntary collective agreements within industries to achieve shared goals or address specific challenges. They help establish ethical norms, technical standards, or cooperative practices in response to regulatory gaps.
  • Memoranda of understanding: Collaborative agreements between governments and organizations that outline mutual understandings, shared objectives, principles, or expectations that are useful for managing emerging technology challenges such as cross-border data sharing or AI research.
  • Model laws and frameworks: Draft legislation or frameworks offered as templates for adoption or adaptation by jurisdictions to address tech-specific challenges like AI liability, digital platforms, or data protection, often harmonized globally.
  • Public-private partnerships: Collaborative frameworks where public and private entities voluntarily agree on specific roles and responsibilities to address technological risks like cybersecurity or digital divides.
  • Soft regulatory guidance: Non-binding interpretations, advisory opinions, or notes issued by regulators to clarify policies, provide interpretative guidance, or address emerging issues. These are aimed at clarifying legal obligations for emerging technologies like blockchain or autonomous vehicles.
  • Standards and certifications: Frameworks, standardized guidelines, or criteria set by private or public bodies that entities adopt, voluntarily or by mandate, to demonstrate compliance or quality, with the intent of providing self-regulation for a particular profession or group. In technology, these help ensure interoperability, safety, and quality in technology systems and are widely used in fields like cybersecurity and IoT.
  • Voluntary codes of conduct: A set of principles or ethical standards that organizations voluntarily agree to follow. These are often developed by industry associations or professional bodies to promote self-regulation and responsible behavior among members to ensure accountability, ethical standards, or interoperability.
  • White papers and policy statements: Informational documents issued by governments or institutions to propose ideas, outline government or organizational positions and intentions, or solicit feedback on certain topics.

Soft law mechanisms offer potential benefits that facilitate faster rulemaking. The process is based on cooperation and collaboration rather than adversarial models of engagement, so the lengthy traditional process is avoided and standards can be arrived at, adopted, and revised relatively quickly. The flexible, contextual formation of many working groups with different stakeholders allows many technologies to be pursued simultaneously. Many soft laws address cross-border implications of technology, such as data governance and cybersecurity. Soft law also allows for iterative learning and adaptation as the implications of new technologies unfold. In due course, as technologies stabilize and clear dominant designs and usage patterns emerge, soft laws can gradually be hardened into prescriptive hard laws.

Despite being much publicized as the optimal pathway for rulemaking, soft law has limitations. Its lack of enforcement power makes compliance entirely dependent on the trust and integrity of the companies involved and their willingness to comply voluntarily. All stakeholders must collaborate in the rulemaking process to agree on norms despite conflicting interests, and companies’ compliance actions must adhere to the spirit of the agreement. Companies with vested interests could use deceptive pathways to project the appearance of compliance while leaving the core problem unaddressed. This shortcoming was exposed recently in the context of the requirement for companies to demonstrate the steps taken toward environmental, social, and governance (ESG) criteria. Broad guidelines that lacked any legal binding led several companies to adopt greenwashing, a deceptive practice of creating a false impression or providing misleading information that products, services, or overall operations are environmentally sound. The lack of transparency, external checks, and enforcement powers makes soft law appear weak and unable to provide the same level of confidence that hard law does.

Adaptive Regulation

Given the pace of growth and complexity of recent technologies, it is impossible to plot a technology’s complete growth curve and probable risks at the initial stages. To overcome this predicament, adaptive regulation employs flexible and iterative rules that evolve alongside the technology, shifting rulemaking from the conventional “regulate and forget” path to a responsive, iterative approach. Adaptive rulemaking suits evolving technologies because it relies on trial and error and balances innovation with safety. The regulator nurtures a faster feedback loop that allows frequent and continuous revisions to regulatory programs in response to changing facts and circumstances, and follows the proportionality principle to quickly assess the progress of the technology and calibrate intervention to the level of risk. If a rulemaking approach proves ineffective, the regulator can switch seamlessly to another approach within the current iteration of rulemaking or a subsequent one.

Institutional Reforms

Institutional reforms are structural changes to the rules and norms of authority. Their core objective is to create organizations that are more adaptive and flexible. These reforms aim to improve the performance, effectiveness, and accountability of institutions such as governments, regulatory bodies, and organizations. Institutional reform covers areas such as organizational structure, procedures, governance, policy, and capacity building. Reforms address inefficiencies, improve decision-making, enhance transparency and accountability, and increase the responsiveness of regulators.

Institutional reforms include making changes within the regulatory entity to remove organizational walls and redundancies. Rigid procedures make way for more flexible, consultative, and inclusive approaches. The entire rulemaking process shifts into an agile mode that closely tracks the growth trajectory of a technology and publishes iterative updates. Regulators become more adaptive, capable of creating fluid working teams that collaborate cohesively. New regulating entities are created to regulate new, complex technologies and functions.

Regulatory Experimentation

Regulatory experimentation is an adaptive approach that allows policymakers to test, refine, and adjust regulations in controlled environments to address emerging technologies or complex issues. It fosters regulatory learning and narrows the gap between innovation and regulation, enabling regulatory entities to gain better knowledge and understanding of the impacts and risks of new-age technologies. It often uses mechanisms such as regulatory sandboxes, pilot projects, test beds, and living labs. Regulatory experimentation is particularly valuable in fast-evolving sectors like AI/ML, fintech, and biotechnology, where traditional rigid regulations may stifle progress or fail to address uncertainties. It can also have broader appeal: early experiments conducted for a specific domain, business, or industry can benefit regulators in other industries, as the data from an early experiment provides additional perspective and helps them further customize the objectives and scope of their own experiments.

Regulatory Sandbox

A regulatory sandbox is a controlled real-world environment created by regulators for a limited period and scope, where innovators can test new ideas, products, services, or business models under relaxed regulations, with regulatory oversight. Regulatory sandboxes facilitate cooperation and open interaction between regulators and innovators. The existing rules or their enforcement are relaxed or suspended in the sandbox environment. This approach allows companies to experiment with technological innovations without fully complying with existing laws, which reduces risk and fosters development. Regulators monitor the process to assess potential impacts, refine policies, and identify necessary safeguards. For example, financial regulators often use sandboxes to evaluate emerging fintech solutions while maintaining consumer protection.

Pilot Projects

Regulatory pilots are time-bound, small-scale trials that assess the feasibility and impacts of new regulatory frameworks or policies. Unlike sandboxes, pilots are initiated by regulators rather than innovators and aim to test new regulations before wider implementation. The regulator defines the exact goals, scope, and metrics of the trial. These pilots evaluate the effectiveness of proposed rules in real-world scenarios, gather feedback, and refine approaches based on observed outcomes. For example, governments might pilot new data protection laws in specific sectors or regions. Regulatory pilots reduce the risks of unintended consequences and ensure policies are well-designed and effective before full-scale deployment.

Test Beds

Test beds are controlled experimental environments, typically established for testing and validation of technologies under real-world conditions. Unlike sandboxes or pilots, test beds focus on evaluating the requirements, performance, safety, and usability of emerging technologies rather than regulations. For example, autonomous vehicles may be tested in designated areas to assess functionality and safety without impacting broader road networks. Test beds involve collaboration between researchers, businesses, and regulators to identify technical challenges, regulatory gaps, and societal impacts. They accelerate innovation by providing a risk-mitigated environment to validate technologies before broader deployment or regulatory integration.

Living Labs

Living labs are open, real-life environments where stakeholders collaboratively experiment with and co-create innovations in technology, policies, or services. They emphasize user-centricity and community engagement, involving end users as active participants. For example, urban living labs might test smart city technologies like IoT-enabled traffic systems or renewable energy solutions within communities. Unlike test beds, living labs focus on social, behavioral, and cultural aspects, examining how users interact with innovations in natural settings. This approach helps refine technologies, services, or policies while ensuring alignment with user needs and societal values. Living labs bridge innovation and implementation through inclusive experimentation.

Temporary and Experimental Regulations

Temporary and experimental regulatory approaches are adopted when regulators find it difficult to make informed predictions about technologies and their growth trajectories. They are issued by regulators to address immediate or evolving needs without going through the full formal rulemaking process. Temporary regulations may limit the territorial scope of a regulation, define its duration (with a sunset clause that causes it to expire after a pre-set period), or delay its commencement (with a sunrise clause tied to certain conditions). They provide a mechanism for periodic review, allowing regulators to assess impact before deciding whether to make the regulations permanent or to adjust them based on real-world results. The experimental period helps regulators gather more information on how the temporary rules work, observe how the technology is progressing, assess evolving impacts and risks, and make iterative changes to the regulations. Rulemaking becomes a continuous learning experience for the regulator, and the experimental, temporary rules are followed by better-informed, evidence-based regulations. These approaches help regulators strike the right balance between innovation and regulation and address the Collingridge dilemma, as they give innovations a chance to thrive while controlling market risk.

Conclusion

For regulators, rulemaking is always an intelligent prediction based on assumptions about the growth trajectory and industry-driven pathways a technology is likely to follow. As with any prediction, these assumptions may lead to different outcomes. The optimal result is regulation that is well thought out, appropriately sized, and still relevant as the technology matures and reaches saturation. Less optimal outcomes are regulations that are undersized, oversized, or fail to evolve with the technology. In some cases, a technology may change course, rendering the original regulation redundant; in others, its long-term impact may prove riskier than anticipated, exposing the original regulation as inadequate or even absurd. The history of regulating technologies is replete with such failures, where the pace of technology far outstripped the regulation intended to cover it. Given the nature of emerging technologies, the most troubling question for regulators is how to avoid major rulemaking failures that could create still greater risk.

It is important to note that until the sixth Kondratiev wave enters its recession phase, the “problem of plenty” will continue, intensifying the pacing problem. Until individual technologies reach maturity and saturation, the “information problem” will persist. Both problems will invariably create oversight gaps. This inevitability shifts the focus from regulators to regulated entities. The most important question is how regulated entities will behave in the grey areas of the oversight gap, where the rules are either nonexistent or unclear. Oversight gaps give regulated entities the freedom to innovate as well as the temptation to exploit; the challenge lies in balancing this freedom with accountability. This grey area is a testing ground for corporate ethics, innovation, and self-regulation, placing immense responsibility on regulated entities to act in ways that uphold public trust.

SRIVATHSAN KARANAI MARGAN works as an insurance domain consultant at Tata Consultancy Services Limited.

References

Alter, K. J., & Raustiala, K. (2018). The Rise of International Regime Complexity. Annual Review of Law and Social Science, 14(1), 329–349.

Anticipatory innovation governance. (2020). OECD Working Papers on Public Governance.

Barry, J. M., & Pollman, E. (2016). Regulatory Entrepreneurship. SSRN Electronic Journal.

Cohen, J. E. (2016). The Regulatory State in the Information Age. Theoretical Inquiries in Law, 17(2).

Cortez, N. (2014). Regulating Disruptive Innovation. SSRN Electronic Journal.

Delmas, M. A., & Burbano, V. C. (2011). The Drivers of Greenwashing. California Management Review, 54(1), 64–87.

Genus, A., & Stirling, A. (2018). Collingridge and the Dilemma of Control: Towards Responsible and Accountable Innovation. Research Policy, 47(1), 61–69.

Marchant, G. E., Allenby, B. R., & Herkert, J. R. (Eds.). (2011). The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem. Dordrecht: Springer Netherlands.

Hagemann, R., Huddleston, J., & Thierer, A. D. (2018, February 5). Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future.

Nefiodow, L. (2017). The Sixth Kondratieff. CreateSpace Independent Publishing Platform.