TL;DR
The "too long; didn't read" summary
- The neoliberal approach to public enterprise has produced a useless maze of disconnected applications that serve the public poorly; the remedy is to rebuild public capacity instead of outsourcing everything;
- We propose a new paradigm of information technology for governments and other public enterprise that uses open source and democratic feedback to power the next generation of IT public benefits;
- Ensuring public benefits that are sustainable and maintainable requires taking a human approach to security, applications, and cost structuring;
- Information technology, from networking to web design, must be handled fundamentally differently in public enterprises, because the incentives must be geared toward providing sustainable and growing public benefits with available resources;
- The wider and deeper these principles are adopted, the greater the "force multiplier" of the development process and solutions proposed;
- This revolutionary approach to rebuilding information technology capacity is an easily duplicated campaign plank for any socialist candidate running for office at any level of government;
- The approach is layered into seven discrete changes to IT policy which can be implemented independently, but work best together.
Introduction
We wish to distinguish socialist approaches to information technology (SAEIT) from the neoliberal market-oriented contracting model, which holds sway over government service provision in much of the world today.
Public enterprises have had great successes and failures in information technology; this document outlines how new approaches might unlock new opportunities for technology in the public service. Specifically, we look at how information technology can be deployed to develop equity and equality in delivering a public benefit for any organization operating at public scale.
The SAEIT ("say-it") model views the goal of increased efficiency and continuous development as deepening the value and reach of public benefits. Where a private enterprise spends to increase its profits and return on investment, our model goes beyond the generic case of the "non-profit" enterprise.
We propose a new model for information technology work throughout public-facing organizations. We outline a standards based, open source model of development and operations for public enterprises, a roadmap for adoption, practical takeaways, and case studies. Although the focus is on the municipal level, this paper is written in the hope that it will be adopted for broader use.
Using real-world cases and analyses of market trends, we outline a novel, comprehensive approach to the development and maintenance of public information infrastructure and functionality. Specifically, we look at how contractors, vendors, and volunteers are typically deployed to outsource public service provision against a strategy of developing internal resources, i.e., permanent staff and long-lasting infrastructure.
Audience and Scope
This document is intended for public enterprises: those which deliver non-profit, public-facing benefits at scale.
Although this paper focuses on the case of municipal government, we intend for the value of the SAEIT model to be self-evident for all sorts of organizations and enterprises.
Here we make a distinction between an organization, which would be the body which sets budgets, and enterprises, which lie downstream of budgetary decisions. One example of this distinction would be a municipal government (organization) versus an IT department within that government (enterprise).
Enterprises engage in projects, sometimes at the direction of the larger organization, sometimes to service internal needs. As we discuss below, the current trend of outsourcing projects within public enterprise denies the public the comprehensive benefits of a public-oriented infrastructure, where all layers are oriented toward providing value to the public, as opposed to capturing value from that public.
The scope of this document is limited to public enterprises downstream of general budgeting decisions. Though many may associate the idea of socialist governments with raising taxes, we make no broader assumptions about budgets than those discussed in the Cost and Revenue Structuring section, which looks at how and whether or not to institute point-of-service payments for certain types of goods or services in order to recover costs while maintaining non-profit status.
Technical Scope
This document is technical in nature, but the specifics of the technology discussed are less detailed than a software-oriented whitepaper. For a more complete technical explanation of the SAEIT stack, please visit the parallel "civilian" stack, the open source BeTTY Project, which uses the same technical foundation.
Guiding Principles
Case Study: USPS vs. Jim Crow
Widespread public benefits, though available to all, can have outsized effects on underserved communities.
African-Americans constrained by local markets and customs gained access to a standardized, Federal service conveying goods, banking services, communications, employment, and access to a nationwide market where these were unavailable to all in the Jim Crow South.
A strong public service became a vital lifeline for those suffering systemic inequity and inequality.
Defining Public Benefit
When everyone uses a public benefit, they are invested in its maintenance because both public and private interests align. The size of a public benefit can be described as the breadth of distribution multiplied by the depth of its usefulness to each member of the public. Benefits offered to the public, at cost, with minimal restrictions are what SAEIT calls "public scale."
Public benefits must be broad and efficient. In this analysis, private contractor profits are expressed as inefficiencies, to be borne only if relative costs are too much to bear to deliver the broadest possible benefit. Every expenditure by a private firm is ultimately geared toward increasing inequity in their favor; a public enterprise must gear its every expenditure toward increasing equity.
Value-Leaders, Not Loss-Leaders
In evaluating market offerings in the IT sector, we must recognize the influence market forces have on the design, distribution, and revenue models inherent in the resulting products.
Private firms with public-facing free IT offerings add value to their products in order to build brand loyalty on the upside of a financing curve, increasing the allure of the product while keeping the cost at point-of-service as close to zero as possible. The aim is to gain a natural monopoly, or at least to capture enough of the market to leverage the user base into a revenue stream; this inevitably results in higher costs at point of use, a devaluation of the freemium tier, or both. These firms seek monopoly in order to extract value from a captive audience, thereby increasing profits. Public enterprises, particularly those in a monopolistic setting, do not have to engineer their products to attract or trap new customers.
Our approach differs in two ways. First, as a public service provider, we assume a particular monopoly over service provision: in many cases there should not be alternative vendors for government services. Second, since the benefits of monopoly won't be directed toward profit, improvements must be directed toward the users, by increasing the value of these public benefits over time. The focus should be on building robust and useful interfaces atop the deep, built-in functionality of the SAEIT stack.
Leveraging The Public Scale
In the private sector, "synergy" has become a synonym for vertical monopolization. On the other hand, the public sector has natural monopolies which should be leveraged to provide more efficient and effective service to citizens. Often, the focus is on horizontal monopolistic power and the vertical is left to sub-contractors. For short-term projects, this makes sense: making a project permanent means staffing one or more permanent positions and an attendant bureaucracy.
Taking a longer-term approach to building a lasting and useful public infrastructure means examining the potential for vertical monopolization. By developing a flexible internal capacity to take on successive projects, we can preserve institutional knowledge and resources, instead of always resorting to short-term outsourcing. For SAEIT, any organization operating in the public interest must rise to the same task.
Delivery is both about provisioning and maintenance. Provisioning should aim toward equity. In phased deployments or "rollouts," start where needs are most acute. Maintenance, on the other hand, is focused on creating and preserving equality: rollouts are complete when all can use the service and it runs robustly.
Standard-First Development
A monopsony (as opposed to a monopoly) is when a single consumer has market-defining powers. In a larger market-based economy, public-scale organizations' best tool for leveraging the scale of public enterprise is to use monopsony power to set costs and standards when dealing with vendors and contractors. At base, monopsony power is about leveraging your own internal market to extract value from suppliers.
Individual applications and their features may come and go, but the infrastructure and the data which travels along it are more permanent. This way, whether a function is ultimately outsourced, underserved, or otherwise outside of the current capacity of your organization, standards may be published to allow future development to align with enterprise goals and existing systems.
The International Organization for Standardization (ISO) is the world standard for industry expert agreement on practical technical matters. It is global, non-partisan, and used by governments and industry alike. SAEIT adheres to ISO standards wherever possible. Where ISO standards differ from local custom, enterprises should add a translation layer in user interfaces rather than storing non-standardized source data. Within the industry, this is known as localization, and it is an important part of the interface strategy described below. Where ISO standards are insufficient, consider adapting the most relevant standard, using it as a model for the creation of a new one.
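As a concrete illustration of this translation-layer approach (a minimal sketch with illustrative values): source data is stored in the ISO 8601 standard form, and conversion to local custom happens only at display time.

```python
from datetime import datetime

# Stored source data stays in the ISO 8601 standard format.
stored = "2025-03-09T14:30:00"

# Translation layer: parse the standard form, then render local custom
# (here, a common US date style) only in the user interface.
dt = datetime.fromisoformat(stored)
us_display = dt.strftime("%m/%d/%Y %I:%M %p")
print(us_display)  # 03/09/2025 02:30 PM
```

Because the stored value never changes, any number of localized interfaces can be layered on top of the same standardized data.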
Applying the Wisdom of DevOps
Any discussion of IT infrastructure at scale must be framed in terms of DevOps, a revolutionary paradigm which has redefined the industry. This approach, whose name is a portmanteau of "development" and "operations," argued that traditional information technology management had segregated organizations into "silos," unnecessarily dividing the development of applications from the work of operating and maintaining them (the province of system administrators).
Central to the effectiveness of DevOps' holistic, integrated development process is its focus on metrics ("observability"). The shift to DevOps across the industry was a rethinking of the whole development process to integrate operator and user feedback, and a whole-organizational view of IT infrastructure.
However, much of DevOps has been diverted into the commercial cloud space. The invention of cloud computing was as revolutionary for enterprise IT as DevOps was before it. Cloud computing represents a major reduction in barriers to market entry: startups could now instantly outsource all of their IT needs to the cloud and focus on the application layer of the TCP/IP stack. The emergence of the Big Three cloud platforms (Amazon, Google, Microsoft) as dominant players has cemented the market-defining power of these firms, and customers have grown wary of vendor lock-in.
Democratizing Information Technology
Reduce Overhead with Democratic Automation
As part of an overall commitment to workplace democratization, this document proposes a diffused-power model of development. Version control provides a means for democracy, if used correctly. In this model, managers (top-down administrators within the organization) direct resources downstream, while developers vote on code changes to send upstream. Similarly, when collecting feedback from users, bug fixes and feature requests should be prioritized and assigned through a democratic process (see User Groups and Community Participation in Layer 5).
Balancing Civil Liberties and CyberSecurity
Governments are both producers and consumers of open source intelligence (OSINT), and are charged with drawing the boundaries between data that is publicly available, and that which is safeguarded from public release. The personally identifiable information (PII) your organization possesses is subject to a robust debate over civil liberties in how such information should be safeguarded, distributed, and/or anonymized.
Public databases have broad potential for misuse, but nevertheless form the basis for an informed citizenry. Information security theory dictates that the level of restrictions applied to sensitive data be determined by the potential damage done upon its release to the public. From this, we extrapolate a "need-to-know" basis for the release of information, and our definition of the public benefit must be weighed against the individual civil liberties which may be threatened by total transparency.
Here we distinguish between two types of data: public communications and privileged information. The first category encompasses announcements, alerts, websites, calendars, dataset releases, and any information mandated for public release by legal requirement. The second includes legal and court documents, agency internal documents, real-time signals which may include PII, and any information about an individual's personal accounts with your organization. (For a discussion of methods we suggest to protect privileged information, see Availability and Retention in Layer 3.)
The SAEIT Seven-Layer Stack
Adapting the OSI Model

Figure 1: The OSI and TCP/IP stacks, side-by-side
The ISO's model for information technology infrastructure is known as the Open Systems Interconnection (OSI) model, or the OSI 7-layer stack, and it remains the standard reference model for today's global Internet infrastructure. By separating the work of constructing a global communications network into discrete layers, standards bodies and volunteers managed to create the Internet, using new, interlinking technologies at every level of the stack, built by both public and private workers.
ARPANet and its successor, the Internet, gave each layer an "end-to-end" guarantee that the lower layers would function transparently and independently. This allows each discrete layer to worry about its own guarantees while accepting those of other layers as given.
The Internet itself is built on the TCP/IP stack, which is a consolidation of the OSI model. The Physical and Data Link layers were united into "Network Access," while the Presentation and Session layers were combined with "Application." SAEIT similarly redistributes the functions of these layers within the stack.
Layer Independence
Key to the success of the OSI model was allowing each layer to be constructed independently, without having to coordinate its decisions with other layers. One reason for this is the end-to-end guarantee required for each layer, which means higher layers simply assume lower layers are standard-compliant.
Following this model, the SAEIT stack is designed to be implemented piecemeal, as transition costs are non-trivial. Each layer can similarly be constructed and implemented independently of the others, though the whole stack works best when all layers are unified in purpose and execution.
The Layers
Layer 1: Sovereign Cloud
Control over your infrastructure through ownership of physical machines and data centers.
Layer 2: Secure Datagrams
Unified data format with built-in encryption and routing information.
Layer 3: Unified Data Lake
Centralized repository accessible to all applications with proper credentials.
Layer 4: Routing & Retention
Directory services and software-defined networking for routing data and governing its retention.
Layer 5: Open Source
Democratic, open source development with community participation.
Layer 6: Sovereign OS
Custom Linux distributions for diverse organizational needs.
Layer 7: Emergent Interfaces
User-facing applications built atop the stack's deep, built-in functionality.
Layer 1: Sovereign Cloud
Taking Control of the Means of Cloud Production
At the base of the SAEIT stack are the physical machines upon which all of this software runs: the cloud. Owning the cloud means control over how efficiencies are distributed throughout the system. Commercial cloud providers optimize their offerings for profit; a SAEIT cloud must be optimized for value. Within public organizations, there are possibilities for synergies and consolidations which can make use of unused intra-organizational capacity. In fact, this is how the cloud began: Amazon's data centers were built with lots of extra capacity, which they realized could be rented out to other firms.
A new "sovereign cloud" paradigm has emerged due to regulatory and other concerns. Initially spurred by the legal requirements of the GDPR, public and private enterprises have been moving to isolate cloud operations within a single legal jurisdiction, with heightened privacy and data control guarantees. There is often an explicit legal requirement that data remain physically bounded within one jurisdiction, and that the cloud provider have no access to unencrypted government or otherwise sensitive information.
Case Study: CrowdStrike Outage
In 2024, a single vendor whose cybersecurity software had become a de facto standard across the Windows ecosystem pushed a faulty update. Soon, Windows machines all over the globe fell into unrecoverable boot loops, causing widespread outages and chaos.
An estimated $10 billion in damages followed. Because a third party had been integrated into application flows and the operating system itself, Windows users were left flat-footed when their external dependency on CrowdStrike was exposed as a weak link.
The Big Three commercial cloud providers (Alphabet, Amazon, Microsoft) also offer sovereign services, using routing to confine information to a section of their global cloud resources. But operating a data center of one's own has greater benefits than legal compliance.
Only through the development of true self-sufficiency can an organization hope to fully control the costs of infrastructure provision. However, the cost of providing infrastructure for a single project may be prohibitive. Building sustainable, permanent infrastructure means being able to extend its use beyond the scope of a single project. Renting infrastructure from a private firm should be a short-term measure, limited in scope.
The sovereign cloud should be secured via traditional methods at the machine level. This is the responsibility of system administrators and should be accounted for as part of cybersecurity Operating Expenses (OpEx). Marshalling underutilized or recycled equipment is no issue for a cloud capable of intelligently routing capacity (see Layer 4: Routing and Retention). Undercapitalized organizations can begin building their own cloud by reusing old machines; intensive capital investment is not always necessary for the SAEIT stack.
Responsible Use of Artificial Intelligence
As of Q2 2025, capital expenditure (CapEx) on artificial intelligence (AI) contributed more to U.S. GDP growth than consumer spending, a unique event in U.S. economic history. Yet a recent study has shown that 95% of generative AI pilots at companies are failing.
AI services resold by the big four in Silicon Valley (Alphabet, Amazon, Microsoft, Meta) are currently loss-leaders, effectively hiding the rising and unchecked energy and computing resource usage demanded by this AI bubble. Having integrated AI into every search, Alphabet and Microsoft in particular are leading the charge against energy efficiency by deploying AI processes unbidden, and at public scale.
While there are some neural-network algorithms for data analysis which have shown promise, here we examine the use of generative large language models (LLMs) in public-facing applications as a proxy for cloud AI services. SAEIT takes the view that these AI investments do not scale well for several reasons.
By design, each AI process is locked in a black-box neural network, meaning the process which generates the output is, in some senses, opaque to both developers and operators. This sets incentives against economization, because each process is an unobservable random walk.
There are also few ways to meaningfully limit usage of public-scale AI-powered services. If we set up the expectation that you can ask an AI chatbot anything, users will end up torturing it for anything they can think to ask, even if the LLM is programmed not to entertain user fancy for too long. Each new session is resource-intensive to open and maintain.
The energy impact of relying on AI can be effectively hidden by vendors' loss-leader strategies (see Value Leaders, Not Loss-Leaders in Guiding Principles). Organizations who own a sovereign cloud and must manage their own energy bills will quickly realize today's models simply aren't up to the marketing promise. Energy costs will be paid by consumers in higher market prices overall, but also in upcoming cloud service price increases to recapture value.
Layer 2: Secure Datagrams
Abolishing the Stateful Paradigm
Within the TCP/IP stack, all applications are assumed to be stateful, which is why the top three layers of the OSI stack are compressed into a single "Application" layer in the TCP/IP stack. An application's state is all of the information of which it must keep track within a session.
Think of an arcade video game. You start the session with a coin, and the machine starts tracking lives, items in your inventory, your score, where your opponents are, etc., and updates this information 25-30 times a second, sending frames to the monitor with a partial representation of your entire game session's state.
By contrast, stateless connections do not assume any history or external context to send data between two parties, which is why these are always one-time-only, one-way interactions.
Stateful connections are also more exposed to "man-in-the-middle" attacks, in which an attacker intercepts or hijacks the open session, bypassing its security.
SAEIT bridges the gap between stateful and stateless by requiring all state information be embedded into the datagram itself, including the security context and mechanism.
It's as if all your game session information were baked into every frame. This means sessions are replayable, and no active, two-way connections need be maintained.
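The arcade analogy above can be sketched as follows; the field names are illustrative, not part of any SAEIT specification. Each "frame" is self-contained, so a receiver with no prior history can resume or replay the session from any single frame.

```python
import json

def make_frame(tick, score, lives, inventory):
    # The frame carries the complete session state it describes:
    # nothing outside it is needed to reconstruct this instant.
    return json.dumps({
        "tick": tick,
        "score": score,
        "lives": lives,
        "inventory": inventory,
    })

frame = make_frame(tick=451, score=9200, lives=2, inventory=["key", "torch"])

# No session, no history: the single frame is enough to resume.
state = json.loads(frame)
print(state["score"], state["lives"])  # 9200 2
```

Because every frame stands alone, there is no two-way connection to maintain and no server-side session to attack or lose.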
Unified Datagram Format
In the OSI model, the Data Link layer translates binary ones and zeros into frames, which are then translated into packets in the Network layer. Packets are units of data consisting of a header (metadata about the data) and a payload (the data itself).
In the SAEIT model, the datagram is a packet whose header contains routing information and whose payload is an encrypted document. In this way, each datagram can be said to represent the state of a document at a given time, with its encryption itself serving as the access-control vector (see Keyed Encryption Based Access Control below).
We propose breaking down internal silos (in DevOps-speak) by unifying data with a common format, so that it can be handled by applications without specific knowledge of its contents. A common datagram format is the basis for unifying the data space in the subsequent layer.
The header of the datagram contains unencrypted information used to locate and potentially index the payload. This means header information is exposed over the network.
SAEIT datagrams must contain all routing information, along with their encrypted payload. This ensures datagrams are secured in transit, and opaque to any layers below 7.
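A minimal sketch of such a datagram follows, assuming a JSON header; all field names are illustrative, and base64 stands in for the real encryption step (see Keyed Encryption Based Access Control below), so this is not a security measure.

```python
import base64
import json
import time

def make_datagram(payload: dict, recipient: str, tags: list) -> dict:
    # Header: unencrypted routing/indexing metadata, exposed on the network.
    # Payload: opaque to every layer between sender and recipient.
    # NOTE: base64 is a placeholder for real encryption, used only to
    # illustrate that intermediate layers see the payload as opaque bytes.
    sealed = base64.b64encode(json.dumps(payload).encode())
    return {
        "header": {
            "recipient": recipient,    # routing information
            "tags": tags,              # indexing (see Flexible hierarchies)
            "timestamp": time.time(),  # makes datagrams time-sortable
        },
        "payload": sealed.decode(),
    }

dg = make_datagram({"body": "Meeting minutes"}, recipient="clerk@city", tags=["minutes"])
# Any intermediate layer can route on dg["header"] without reading the payload.
```

The header alone carries everything lower layers need, so the payload can remain sealed from sender to recipient.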
Keyed Encryption Based Access Control (KEBAC)
Packet-level encryption provides both authentication and authorization without heavy security infrastructure. Asymmetric, or key-pair encryption is a natural fit for this, as no secrets need be shared between sender and recipient before the first communication, as with symmetric encryption schemes.
With asymmetric encryption, users retain a private key which is never shared. A public keyserver allows anyone to send encrypted information to any listed recipient which can only be decrypted with the private key.
With this information embedded into each datagram, there is no external session authorization necessary; encryption guarantees that only the intended recipient (authorization) can access the material by decrypting it (authentication).
Supplemental material: Read a preliminary draft of the KEBAC RFC.
Flexible schemas
Before flexible schema databases were introduced, database coordination meant getting each row to have the same columns, and linking various tables together. But nested document standards like JSON allow records in the same "collection" to have completely different keys and structures, as long as each record has a unique identifier (See Content-Based Addressing in Layer 3).
In order to avoid the issues of format lock-in and to promote a usable and adaptable standard, we recommend building the datagram on JSON-LD, with compression and encryption at the packet level. However, with encryption comes a loss of searchability and potential introduction of incompatible data.
Allowing a flexible and expansive set of standards for the datagram header will allow functionality to evolve and diverge within reason and compatible scope. This is made possible by JSON's ability to embed an entire subdocument as the value in any key-value pair.
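The flexibility described above can be sketched with two records in the same "collection" (field names are illustrative): their structures differ completely, yet each carries a unique identifier and one embeds a nested subdocument as a value.

```python
# Two records in the same "collection" need not share a schema;
# only a unique identifier is required.
collection = [
    {"id": "a1", "type": "Article", "body": "Text",
     "author": {"name": "User Name", "bio": "Short biography"}},  # nested subdocument
    {"id": "n7", "type": "Notice", "expires": "2026-01-01"},      # entirely different keys
]

# The only invariant: identifiers are unique across the collection.
ids = {record["id"] for record in collection}
assert len(ids) == len(collection)
```

New record types can thus be introduced without migrating or breaking existing data, as long as identifiers remain unique.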
Layer 3: Unified Data Lake
SQL vs. NoSQL
The traditional "relational" database model, first proposed in 1970, was the standard for computer databases for a generation. Relational databases are like linked spreadsheets: tables of predefined columns, with each row constituting a record. Tables may be linked using "foreign keys," meaning fields whose values refer to unique key values in another table. Below is a typical relational model for linked "Articles" and "Users" tables within a single database:

Articles: article_id (unique key), author_id (foreign key to Users), body
Users: user_id (unique key), name, bio

Each table's first column is a "unique key," a value which can be used to identify a single row. In this example, "author_id" is linked to the "Users" table, so when a user contributes a document, the information in the Users table can be used for authorization and authentication, and to include the author's bio when presenting the document in the Application layer.
Relational databases tend to use the Structured Query Language (SQL) as a standard interface, so they are often referred to as "SQL databases" as well. In the late 1990s, a new paradigm called "NoSQL" arose, and JavaScript Object Notation (JSON) was eventually adapted to NoSQL "document" databases like MongoDB. In such databases, we might see the above rendered as nested key-value pairs in a single, unitary JSON document:
{
  "article_id": 123456,
  "author": {
    "name": "User Name",
    "bio": "Short biography"
  },
  "body": "Article Text"
}
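For contrast, the relational version of the same data can be sketched with Python's built-in sqlite3 module; table and column names here are inferred from the JSON example above, not prescribed by SAEIT.

```python
import sqlite3

# In-memory database with the two linked tables from the example.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT, bio TEXT);
    CREATE TABLE articles (
        article_id INTEGER PRIMARY KEY,
        author_id  INTEGER REFERENCES users(user_id),  -- foreign key
        body       TEXT
    );
""")
db.execute("INSERT INTO users VALUES (1, 'User Name', 'Short biography')")
db.execute("INSERT INTO articles VALUES (123456, 1, 'Article Text')")

# The join re-assembles what the JSON document stores as one nested record.
row = db.execute("""
    SELECT a.article_id, u.name, u.bio, a.body
    FROM articles a JOIN users u ON a.author_id = u.user_id
""").fetchone()
print(row)  # (123456, 'User Name', 'Short biography', 'Article Text')
```

The relational approach normalizes the author into its own table; the document approach denormalizes it into each record, trading storage for schema flexibility.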
Out of Many Pipelines, One Lake
In typical public enterprise, each application is externally developed with its own stack responsible for authentication and authorization, opening up a stateful connection to a cloud server. This often means setting up a parallel data pipeline to other siloed applications within the organization. SAEIT obviates this model with a centralized data repository common to all enterprises.
Central to the SAEIT stack is a reconception of the linear data pipeline as a non-linear data lake. This is a broad application of the DevOps idea of breaking down internal silos; we extend this concept to all data within the organizational space.
Because the SAEIT datagram is time-sortable and encrypted both in-flight and at-rest, there is no need to restrict pipeline access to a single application or process. This means that instead of setting up a maze of unidirectional message streams (e.g., Amazon's SQS) between processes, all applications with the appropriate key (see KEBAC in Layer 2) access the data asynchronously, without need for locks or latencies.
An open field of retrievable datagrams could be hosted via IPFS (see below), which provides a more sophisticated method of managing and scaling immutable bits of data than setting up individual streams between application endpoints.
Lakehouses
Many strategies for handling large sets of disparate data have arisen in the age of machine learning and data ingestion and processing at scale. An early product of the "Big Data" era was the data warehouse, a large central repository for an organization's data, used in Extract, Transform, Load (ETL) pipelines.
For large-scale storage of undifferentiated raw data, the data lake was proposed around 2010. A hybrid approach is the data lakehouse: a lake in which all documents are schema-conformant.
Flexible hierarchies
Traditional databases (from SQL to NoSQL) typically organize their information in an addressable hierarchy to make things easy to find. Rows exist in a single table in SQL, corresponding to documents in a single NoSQL collection.
However, we can build a better system into the datagram itself, as we did with Keyed Encryption Based Access Control: instead of siloing data into top-down structures, we can embed tags into the header data. Documents can then exist in multiple "collections" at once (called a many-to-many association).
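Tag-based membership can be sketched as follows (tag and field names are illustrative): a "collection" is simply the set of documents carrying a given tag, so one document belongs to several collections at once.

```python
# Tags in each header place a single document in many "collections" at once,
# following the datagram sketch in Layer 2. All values are illustrative.
docs = [
    {"id": "d1", "tags": ["budget", "2025", "public"]},
    {"id": "d2", "tags": ["budget", "internal"]},
    {"id": "d3", "tags": ["minutes", "public"]},
]

def collection(tag):
    # A "collection" is just the set of documents carrying a given tag.
    return [d["id"] for d in docs if tag in d["tags"]]

print(collection("budget"))  # ['d1', 'd2']
print(collection("public"))  # ['d1', 'd3'] -- d1 appears in both collections
```

Unlike a top-down hierarchy, no document has a single "home": membership is read off the header, so reorganizing collections never requires moving data.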
Content-Based Addressing
There are many proprietary and open source products for building, maintaining, and searching data lakes. At the network level (that is, inter-enterprise or inter-organization), SAEIT recommends using something like the InterPlanetary File System (IPFS) as the basis for the unitary lake. IPFS provides robust, redundant file storage and content-based addressing, and maintains searchability and retention policy with pinning and advertising services built into the protocol.
Content-based addressing uses a one-way hash of the datagram's content to give it a unique identifier on the network (i.e., within the Unified Data Lake). This means different edits of the same document occupy different addresses, and redundant content is immediately identified (personalized documents and communications are just different enough to produce a completely different hash value; with a strong enough cryptographic hashing standard, a file's contents are not guessable from the hashed address).
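The mechanism can be sketched with a plain SHA-256 hash (IPFS itself uses self-describing multihash content identifiers, so this is a simplification):

```python
import hashlib

def address(content: bytes) -> str:
    # A one-way hash of the content serves as its network address.
    return hashlib.sha256(content).hexdigest()

original = address(b"Meeting minutes, 2025-03-09")
edited   = address(b"Meeting minutes, 2025-03-09 (corrected)")

assert original != edited  # each edit occupies a new address
assert original == address(b"Meeting minutes, 2025-03-09")  # duplicates collapse
```

The two assertions capture the properties described above: edits produce distinct addresses, while identical content always maps to the same address, making redundancy immediately visible.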
Local-first
In response to the pervasiveness of the cloud and the SaaS model, the local-first movement within IT advocates relocating user applications to run on local devices with their own copies of data, as opposed to keeping sessions and stateful connections to cloud machines open during use.
Local-first is a commitment to trusting local environments to handle end-user data responsibly after the document is decrypted by the intended recipient (see Human-focused security in Layer 7). When data is returned to the centralized system in our model, it must be re-encrypted to be stored at-rest securely.
Not all applications can be designed or converted to a local-first model. However, at higher layers we can assume that machines enacting Layer 3's end-to-end guarantees are handling their own, divergent copies of data, which will be returned to central repositories over the same pipeline. This asynchronous access to the Unified Data Lake avoids the need for true stateful connections.
Local-first systems typically resolve conflicts with CRDTs (conflict-free replicated data types): data structures whose merge rules guarantee that divergent copies reconcile to the same state regardless of the order in which changes arrive. There are many CRDT designs, and their merge rules can be customized for your enterprise's needs.
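One of the simplest CRDTs, a grow-only counter (G-Counter), shows the idea in miniature. This is a textbook sketch, not part of any particular local-first product: each replica increments only its own slot, and merging takes the per-replica maximum, so reconciliation is deterministic in any merge order.

```python
# G-Counter: per-replica counts; merge takes the elementwise maximum.

def increment(state: dict, replica: str, n: int = 1) -> dict:
    new = dict(state)
    new[replica] = new.get(replica, 0) + n
    return new

def merge(a: dict, b: dict) -> dict:
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(state: dict) -> int:
    return sum(state.values())

# Two machines edit their own local copies while offline...
office = increment({}, "office", 3)
field  = increment({}, "field", 2)

# ...and reconciliation is order-independent (merge is commutative).
assert merge(office, field) == merge(field, office)
assert value(merge(office, field)) == 5
```

Richer CRDTs (for text editing, sets, or maps) follow the same pattern: all the coordination is baked into the merge rule, so no stateful session with a central server is ever required.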
In the end-to-end guarantees for the routing layer, then, we assume machines on both ends are local-first, even if the user interface implemented at Layer 7 is not. With the necessary routing information already embedded in the packets at Layer 2, along with KEBAC (which ensures payloads are opaque to all stack layers between 3 and 5), session-related security measures become unnecessary.
Layer 4: Routing and Retention
Implementing this layer is a good potential first step in implementing the SAEIT stack.
Taking control over directory services
The ability to scale depends on this layer, whether through carefully planned expansions of services or autoscaling a virtual server cluster. Software-defined networking (SDN), implemented at this layer, is what gives the cloud the flexibility to rise to public-scale service. Through reverse proxies, SDN let many virtual machines appear as a single endpoint to end users, making cloud scaling "just work" for firms looking to outsource system administration entirely.
Controlling domain name services (DNS), public-key infrastructure (PKI), and reverse-proxy load balancing (all of which we will collectively refer to as "directory services") is crucial for managed transitions to the SAEIT stack. Not only does this allow vendor independence from comprehensive cloud offerings like Amazon's Route 53, but configuring your own SDN layer lowers the cost of transitioning to the sovereign cloud.
Minimizing transition costs
We cannot pretend that every organization can immediately turn around and implement all, or indeed any, of the recommendations in this document. The cost of moving from an existing system is the determining factor in whether an enterprise can make use of these principles at all. The separation of the stack is designed to allow piecemeal replacement of public infrastructure in a sustainable manner. For these enterprises, the manner of execution is just as important as the ultimate goals: a badly deployed service can do more harm than good.
We hope to help organizations realize the long-term value of self-sufficiency as a counterweight to the urgency of short-term savings. The entire model is not necessarily meant to be instantaneously and universally deployed within existing organizations; the costs of retraining end users and system operators on new software are non-trivial.
A properly configured proxy layer allows reverse-engineering and transitioning away from proprietary software whose APIs are public. When you can keep a new resource at an old address, you immediately reduce transition costs.
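The "old address, new resource" idea can be illustrated with a rewrite table of the kind a proxy layer maintains. This is a hypothetical sketch; the paths, hostnames, and `upstream_for` helper are invented for illustration, not drawn from any real deployment.

```python
# Hypothetical proxy-layer rewrite table: legacy API paths that client
# software already depends on keep working while the backend behind each
# path is swapped out, service by service.

REWRITES = {
    "/api/v1/records": "http://records.internal/lookup",        # migrated to open source
    "/api/v1/billing": "http://legacy-vendor.internal/billing", # not yet migrated
}

def upstream_for(path: str) -> str:
    """Resolve an inbound request path to its current upstream service."""
    for prefix, target in REWRITES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return target + path[len(prefix):]
    raise KeyError(f"no route for {path}")

# End users see one stable address space; migration is invisible to them.
assert upstream_for("/api/v1/records/42") == "http://records.internal/lookup/42"
```

In practice this mapping lives in a reverse proxy (nginx, HAProxy, or an SDN controller) rather than application code, but the principle is the same: transition costs drop because the public interface never moves.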
Availability and Retention
Undifferentiated and poorly managed data lakes can become wastelands, taking up storage resources with useless data, so the lake must be "swept" regularly. But there is more to the maintenance of the data lake than resource policing; the retention of data has implications for civil liberties and public safety.
Information security (InfoSec) theory uses classification as a means to separate and grade risks and security measures for different types of documents based on two broad principles:
- "Need-to-know" disclosure, meaning that access to a document must be operationally justified;
- Classification based on the potential damage done by public disclosure of the document.
It's easy to see how these two principles guide organizational secrecy, but for public enterprises, they must be balanced against questions of civil liberties. A public-facing enterprise doesn't only collect personally identifiable information (PII), but releases it as well. The release of such public datasets may inadvertently violate the privacy of people who become identifiable through the data. To combat this, various methods are used to redact or otherwise protect the information. Data retention and anonymization measures available to maintain privacy in public data include:
- Delay: Simply waiting to release information which only has public safety implications if released in real-time.
- Archive: Move data out of easy access to an off-network location to be retrieved only if needed. With encryption, data can be retained in the same lake, but with a different encryption vector.
- Redact: Remove some data from the set before public release.
- Expire: Simply remove old data regularly by date, setting a lake-wide expiration date for all files.
- Sweep: Remove data based on any non-time dimension.
- Compress: Consolidate the data based on one or more non-identifying dimensions, retaining some representation or summary of the data without redacting.
These measures can, of course, be applied across the stack when looking at any data release, whether public or intra-organizational. Security demands mean operational details must be obscured from the public, but this does not mean there is no responsibility toward transparency, or to release useful information at some point, after applying these methods listed above.
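A release pipeline combining several of the measures above might look like the following sketch. Everything here is illustrative: the field names (`created`, `delay_days`, `retain_days`, `pii_fields`) and the `prepare_release` helper are assumptions, not part of any real lake schema.

```python
from datetime import date, timedelta

def prepare_release(doc: dict, today: date):
    """Apply expire, delay, and redact policies before a public release.
    Returns the releasable document, or None if it must be withheld."""
    age = today - doc["created"]
    # Expire: drop documents past a lake-wide retention window.
    if age > timedelta(days=doc.get("retain_days", 3650)):
        return None
    # Delay: hold back data that is sensitive only in real time.
    if age < timedelta(days=doc.get("delay_days", 0)):
        return None
    # Redact: strip personally identifying fields before release.
    return {k: v for k, v in doc.items() if k not in doc.get("pii_fields", [])}

doc = {"created": date(2024, 1, 1), "delay_days": 30,
       "pii_fields": ["patient_name"], "patient_name": "X", "count": 7}
released = prepare_release(doc, date(2024, 6, 1))
assert released is not None and "patient_name" not in released
```

Sweep and compress would follow the same shape, operating on non-time dimensions or aggregating records instead of filtering them.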
Layer 5: Open Source Everywhere
Careful Use of Metrics
One issue with the contracting model of production is that it allows individual firms to focus on fulfilling contract terms rather than holistically considering the value of the public services and benefits their work is meant to deliver. These terms are often defined in contracts as deliverables and key performance indicators (KPIs).
Even the Soviet Union suffered from the contract model in internal production management. Western software developers tell a legend of the "Soviet Shoe Factory Problem."
Production quotas in the USSR were issued nationally to individual factories. In the Western version, when quotas were increased (i.e., the factory was asked for more shoes), the lines were switched to making children's shoes (which require less of the already limited raw material per pair). This resulted in a surplus of children's shoes and a dearth of adult sizes.
In the Russian version of the story, production quotas had been issued from Moscow using a single metric: kilograms. So the factory in this story simply began making heavier adult shoes to fulfill the quotas while decreasing output and making worse products.
In both versions, the moral is that clear scopes and KPIs are not enough on their own: metrics must be relevant, and responsive to emergent ways of gaming the system.
Insource, Open Source, Outsource
Not all infrastructure or service needs can be met internally within the scope of a particular enterprise. Public enterprises, like all others, must work with practically limited resources and timelines for project delivery. Since time immemorial, they have turned to outsourcing (paying private firms to fulfill civic duties) to fill gaps in their capacity to deliver public goods and services.
Many public-scale enterprises cannot reasonably hope to avoid contracting, at least in the short and medium terms. So we must develop principles for contracting private firms in a way that preserves public value in the product, provides equitable pay, and maintains a pool of potential contractors available for stop-gaps as needed.
Sustainable Open Source
The SAEIT model looks at the long-term implications of open-source-based enterprises as service provision rather than application maintenance. For this reason, we focus on employees and users, rather than contractors, when thinking about resource distribution. New projects should be budgeted largely around the operating expenses (OpEx) expected for support, since development costs will hopefully decrease significantly over time.
Where contracts are awarded, the metrics by which performance is judged must be considered against renewals. We must ask: have sufficient operating capacities been generated and captured "in-house" to replace the capacities of this contract going forward?
In evaluating a contract, we must consider the profit of the contracting firm to be expressed as a measure of inefficiency in the provision of that good or service, compared to insourcing (using and developing existing resources).
Defending Open Source Commitments
Occasionally, an open source project wholly supported by a single vendor will implement license changes to recapture value from open source licensees. This is a threat to the sovereignty of the SAEIT stack, since it may change the cost calculations implicit in using (erstwhile) open source software in a SaaS model.
Fortunately, the open source volunteer community is fairly resilient, and there will typically be a truly open source alternative to products whose licenses have been compromised for a profit incentive.
The Shoddy Rich of Fifth Avenue
In the early days of the Civil War, each state's army had to equip and ship out massive forces in short order. New York State had no internal capacity to sew its companies' new uniforms, so they turned to the famous Brooks Brothers firm, whose business had been built on selling slave uniforms back to the antebellum South, made from their own raw cotton exports.
Contracts in the 1860s were often awarded based on open bribery. When it came to delivering on this contract, the quality of Brooks Brothers' army uniforms was found wanting:
"Brooks Brothers glued together shredded, often decaying rags, pressed them into a semblance of cloth, and sewed the pieces into uniforms. Far from protecting the soldiers from inclement weather, these uniforms would fall apart in the first rain. The New York State Legislature eventually spent $45,000 — about $10.8 million in current dollars — to replace the uniforms. The company stonewalled; when asked why he did not lower his prices for using lesser materials, one of the proprietors, Elisha Brooks, responded, ‘I think that I cannot ascertain the difference without spending more time than I can now devote to that purpose.'" (New York Times 5/9/11)
The shredded-and-glued makeshift cloth was referred to in those days as "shoddy" and was typically used for interior coat linings, not the whole of a garment. The Brookses and other war profiteers who built mansions on Manhattan's Fifth Avenue became known to the public as "shoddy millionaires."
Every Expenditure Is an Investment
The prevailing paradigm of public enterprise involves a heavy reliance on private firms for everything from infrastructure to development to operations. For public enterprise, infrastructure is an intrinsic part of service delivery and must be part of service design as well.
When categorizing operating expenses devoted to contractors with their own infrastructure, we must consider these lost opportunities for public enterprises to leverage ownership of said infrastructure for other public benefits.
Separating any part of what could be public infrastructure into private hands must be evaluated on the basis of costs and benefits to both the public interest and the private contractor. Private investments in information technology infrastructure have often been disastrous in the long term. While this may allow a municipal buyer of last resort to pick up buried Internet cabling on the cheap, relying on the failure of well-financed private firms is not a sustainable long-term strategy either.
Engagement Across Organizations
DevOps has demonstrated the value of breaking down internal silos in development and delivery in both public and private sector IT. Open source development has shown that there is a non-profit, non-commercial model for building a true public benefit. Deploying open source software developed by other SAEIT organizations allows frictionless cost-sharing across public enterprises.
The potential for diffusing development and support costs to the community—while allowing new enterprises to lower development costs everywhere—can only happen where there is a strong and engaged user base. However, for organizations relying on open source products for production (i.e., staking all of their operations on that software working well), sponsored development has played a crucial role. The more organizations adopt SAEIT, the cheaper these shared development costs get, as these organizations contribute code back to the open source project.
User Groups and Community Participation
Free software has a tradition of "User Groups" where peers offer support, implementation tips, talks and a way for professionals to connect over regular, in-person meetings of fellow users of a particular piece of software. Linux User Groups (LUGs) often came to encompass open source software users in general, since Linux is the most popular open source OS.
The User Group has potential for a loosely structured but supportive community to expand the use and value of those SAEIT products which are distributed directly to the public. Users may become developers as well, but even non-technical, live feedback is an important part of a responsive development ethos.
It is important to note that community engagement must be viewed as a supplement to, not a substitute for, developing internal capacity. SAEIT views the open source community as a "force multiplier" for the broader effectiveness of the stack.
Layer 6: Sovereign Operating Software
Redistributing GNU/Linux
The most popular and successful example of open source technology is the Linux kernel, which powers a vast and branching array of operating systems based on what is now an extremely well-funded volunteer project.
However, the Linux kernel is just one part of the larger GNU/Linux ecosystem. Linux distributions are comprehensive operating systems which include the Linux kernel as well as install scripts, utilities, graphical interfaces, package managers, and all sorts of other software bundled into a coherent offering.
As of 2025, distrowatch.org lists hundreds of Linux operating systems, all largely interoperable and capable of running the same software (sometimes with a bit of work on the part of system operators).
Distributions often "fork" from others when one or more developers within the community decide they need to take the project in a completely different direction. This is an expected part of open source; indeed, all of today's Linux distributions can be traced to four early distributions created between 1992 and 2002.
Many Linux distributions are commercially backed. Although open source licensing is meant to prevent the possibility of vendor lock-in, there have been instances where the commercial sponsor of development will decide that strictly adhering to the principles of free software isn't profitable any more. Fortunately, there are also non-commercial projects which are community maintained by volunteers.
Sovereign Operating Systems
Open source operating systems such as Linux and FreeBSD have major advantages over proprietary platforms such as Microsoft Windows or macOS because of access to the source code and the ability to contribute improvements. Controlling your own repositories allows sharing, localization, and ultimate control over resources, extending the benefit of access to and control over code.
China's OpenKylin and India's BOSS are examples of sovereign Linux distributions: free operating systems which are developed downstream from major open source projects (principally Debian GNU/Linux). These OSes are interoperable with the Linux open source ecosystem, but are specifically maintained, supported and standardized by public enterprises.
We propose a SAEIT Linux distribution (or "distro"), composed of open source software curated and/or developed for the needs of the public enterprise stack, and customized for a variety of public and internal organizational needs.
Controlling costs
Open source software is free to use and modify within the restrictions set forth in a license chosen by the developer. This means the costs for such software are exclusively in the OpEx budget, namely, added support work for system administrators, as well as training costs for staff, which are always required for any software transition.
For many private firms, training is a loss leader, often offered as a deal sweetener in large enterprise contracts. While publicly available educational materials about their products may help raise product awareness, the more intensive labor of retraining an enterprise's worth of employees and supporting them through an IT transition is a cost they would prefer to minimize. For enterprise IT products, a powerful enough company will create paid certification exams for individual engineers, essentially erecting a private tollgate into the labor market for supporting their proprietary products.
Within the security contexts described above, public distribution, training, and even certification need not constitute cost-based barriers to entry for a potential SAEIT-knowledgeable labor pool. The lower the barrier to entry for (and access to) these services, the better.
Potential SAEIT OS Versions
A SAEIT Linux distribution can standardize user interfaces and security measures beyond the application space. Licensing fees for proprietary systems are inherently limiting, giving away capacity to the private market for the temporary administrative relief of outsourcing. However, releasing a free operating system which is also used and supported internally within your organization provides an ancillary benefit to the public, and opens a potential field of workers already familiar with the OS.
Apple gained a strong foothold and unwavering brand loyalty by capturing the education market early, and specializing in creative arts technology. But Apple's proprietary hardware is far more expensive than generic equipment which can run Linux or other low-overhead open source OSes. One great advantage to Linux and similarly customizable open source OS families is the ability to run on diverse hardware. The Linux kernel has been "ported" to most CPU architectures, from desktops to mobile phones to embedded systems.
The BOSS Linux project offers a variety of versions within its offerings for various uses; server, desktop, and specialized operating systems all branching from a unified repository. Our hope is that the SAEIT stack, including server and desktop operating systems, will encourage a community of users across enterprises and organizations. SAEIT should adopt a similar strategy to that of BOSS, with several purpose-customized versions (sometimes called "spins") built out of a centrally maintained package repository:
- SERVER – Based off of a well-established, non-commercial Linux distro
- DESKTOP – Designed for enterprise staff, but freely available to all
- SET-TOP – Designed for kiosks, billboards, home TV sets, etc.
- CONTAINER – A slimmed-down SERVER OS for Kubernetes, etc.
- EDUCATIONAL – A "spin" of DESKTOP for students and teachers
- MOBILE – A "spin" of the SET-TOP distro designed for touch-screen use
Layer 7: Emergent Interfaces
Interfaces provide a window to documents. They may provide a visualization or summarization, editing, mapping, new document creation or messaging. Human interfaces to the SAEIT stack should deliver value to the public by increasing the usability of your organization's projects and resources and decreasing or outright eliminating barriers to entry.
With authentication, authorization, state representation, and data formatting already handled at lower layers of the stack, interfaces should have a much quicker development time than full-stack applications in a typical public contracting setting. Unlike the TCP/IP "application" layer, which consolidates presentation and session into applications because the assumption is a two-way, client-server relationship, SAEIT's top layer is simpler, as it only assumes one-way, sessionless connections.
Shallow Emergence From a Deep Stack
Emergence is the phenomenon of an unexpected pattern arising from the interaction of two or more known patterns. In IT, we talk about new technology that shifts paradigms as "emergent." This can be good or bad; we speak of attackers exploiting previously unknown IT vulnerabilities ("vulns") as "emergent threats."
In the shallowness of the SAEIT stack's top layer lies the potential for great breadth. Fostering and directing beneficial emergence is a major benefit to open source development.
The "shallow" requirements for interfaces should be based on fully implementing Layers 2-4:
- Comprehension and adherence to the Unified Datagram Format (see Layer 2);
- Read (and/or write) access to the Unified Data Lake (see Layer 3);
- Decryption and encryption capabilities in-app, using the PKI services of Layer 4.
Unburdened by any other context requirements, SAEIT interfaces meeting the requirements above can focus purely on front-end development, crafting user experiences ("UX") to help people interact with these complicated technical systems on a human level.
Human-Focused Security
Within both public and private enterprise, there has been an increased trend toward outsourcing and economizing on security expenditures by relying on technological solutions and outsourced Security-as-a-Service firms. The Achilles heel of such product-focused approaches is typically the human factor: a leaked password which compromises an otherwise sophisticated system, a careless contractor who leaves their laptop in a cab, the Friday afternoon release of code from sleep-deprived programmers on deadline. The best locks can't help if you forget the keys.
Security Models for Humans
SAEIT views security as a primarily human concern, rather than one which should be solely left up to the technical arms race. In keeping with the principle of increasing internal capacity, we advocate spending considerable resources on staff (and potentially public) security training, certification, and recertification.
For users, the security model is very similar to the world of paper documents. Physical possession of a paper document (the equivalent of a decrypted datagram) grants the holder certain rights and responsibilities, enforced only by security culture: they have the ability to copy the document, destroy it, release it to the public, or deface it. The only thing preventing unwanted abuse of document access are the human-focused security measures implemented by your organization.
Within the Unified Data Lake, only forward-facing changes are made, so documents which are still live on the network cannot be destroyed or defaced. But the local-first guarantee means SAEIT assumes the technical ability for users to copy and potentially redistribute anything they can decrypt, the same way screenshots are often used to get around technological impediments within apps that don't want users to export data.
Red Team/Blue Team Auditing
Aside from production flaws like security bugs and supply-chain attacks (which should be handled by cybersecurity staff as part of the DevOps development cycle), training your staff is crucial to maintaining public safety. As part of that human focus, SAEIT prefers ongoing, randomly scheduled, active penetration testing over buying vendor promises about the capacity to outsource cybersecurity infrastructure.
In cybersecurity, we suggest the red team/blue team method, where half of the cybersecurity team is assigned the "red" role of attacker (the aforementioned active penetration testers) and the other half assigned the "blue" role, defending the system from the red team. These live exercises should involve non-technical staff to ensure your organization trains and defends against "social engineering" attacks, which are non-technical ways to compromise IT systems through forging or otherwise compromising human access.
Even a small red team can help maintain security standards throughout the organization, but it is the blue team that actively keeps your organization's IT assets and infrastructure safe.
Scaling Support for Public Benefits
As with privately developed products, if a SAEIT project is offered to the public, there will be some type of user support needed. Here we sort the various support options by intensity of resource expenditure into staffed ("active") and unstaffed ("passive") approaches:
| Active | Passive |
|---|---|
| Triage — Active staff attention on-call for emergency repairs of production systems before following up with post-mortem analyses. | Internal User Groups — A feedback loop for inward facing projects and non-public aspects of enterprise. These are self-organized, as opposed to fora. |
| Help Desk — traditional, ticket and queue based support by dedicated staff during regular hours. | Community Fora — Publicly hosted or otherwise fostered user groups for publicly released software. |
| Internal Forum — A community forum for staff or a subset of the public, for peer support of internal or otherwise gated products used and developed locally | Online Documentation — Always available, usually helpful for users trying to solve their own tech needs. |
Building Valuable User Experiences
The UX field is concerned with designing human-usable interfaces and products, and as such, is the focus of development in this layer. Traditionally, usability is the focus, but in private firms, UX engineers have the added responsibilities of maintaining branding and marketing goals, as well as any other profit-related goals which might otherwise limit usability to capture value for their firm.
SAEIT user experiences are about lowering barriers to access for public information, and maintaining local-first security for sensitive material. Principally, SAEIT interfaces let users work with one or more documents, copying them from the Unified Data Lake, modifying them if needed, and reuploading any changes back to the Lake. As such, interface designers can focus on extending functionality and customizing interfaces for different types of users in different contexts.
Poka-yoke
Poka-yoke is a design concept from the Toyota Production System; the name is Japanese for "mistake-proofing." Multi-step procedures are designed so that humans can only move on to the next step by interacting with the system in the expected manner. An example of poka-yoke design is a washing machine which won't start the cycle until the door is securely locked. In SAEIT, we suggest using poka-yoke to ensure the right thing to do is always the easiest thing to do in any user interface.
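The washing-machine example can be sketched as a tiny state machine. This is purely illustrative; the class and method names are invented for the example.

```python
# Poka-yoke sketch: the interface refuses to advance until the prerequisite
# is satisfied, rather than warning the user after a mistake is made.

class WashingMachine:
    def __init__(self):
        self.door_locked = False
        self.running = False

    def lock_door(self):
        self.door_locked = True

    def start(self):
        if not self.door_locked:   # mistake-proofing: the wrong step is impossible
            raise RuntimeError("lock the door before starting the cycle")
        self.running = True

m = WashingMachine()
try:
    m.start()                      # premature start is rejected outright
except RuntimeError:
    pass
assert not m.running

m.lock_door()
m.start()
assert m.running
```

In a user interface the same principle usually appears as disabled buttons, enforced step ordering, or input widgets that cannot express an invalid value.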
Responsive design
Modern websites tend to use a single "responsive" codebase for desktop and mobile versions, using CSS to adjust the layout automatically to the viewing device. Using a single HTML/CSS "web gateway" allows client-server-type applications to deliver the guarantees of local-first while maintaining the traditional TCP/IP structure, and it also allows for easy cross-platform and cross-device development.
Cost and Revenue Structuring
Payment at Point-of-Service
Socialist approaches to public benefits usually preclude means-testing. This means a preference for providing services at zero point-of-service cost, backed by broad-based tax revenue. To promote equity, such free services might be limited per-user if resources are particularly scarce. The goal of infrastructure should be ever-lower costs, thanks to greater economies of scale.
On the other hand, there may be services whose cost or purpose do not fit the free-at-point-of-service model. If the provision is unusually costly, or the benefit being provided occupies a particular niche, there may be a need to collect payments. Needless to say, this category cannot include emergency services or anything life-threatening.
In the case of non-free services, charges should hew as closely as possible to the portion of the service's per-user cost above the average per-user cost across all services (both free and non-free).
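One possible reading of that rule, as a worked example (illustrative only; the function and figures are invented, not a prescribed pricing formula):

```python
# Charge only the portion of a non-free service's per-user cost that
# exceeds the enterprise-wide average per-user cost across all services.

def point_of_service_charge(per_user_cost: float, avg_cost_all: float) -> float:
    return max(0.0, per_user_cost - avg_cost_all)

# A niche service costing $40/user against a $15/user enterprise average...
assert point_of_service_charge(40.0, 15.0) == 25.0
# ...while services at or below the average stay free at point of service.
assert point_of_service_charge(12.0, 15.0) == 0.0
```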
Metered Usage
Additionally, the provision of business services to private firms should be considered for such non-free cost structures. Individual departments (and indeed entire branches) of government often lack the means to raise funds on their own. Licensing fees have long provided income streams for governments. SAEIT views any one-time or annual costs charged at point-of-service to operate much like a license: fixed and published in structure, and priced to fund positive regulation rather than a bar to market entry or punitive tax.
Freemium vs First-Served-Last-Served
In the private market, we often see firms on the upside of a financing curve offer valuable IT services to the public for free, subsidized by private investors looking to recapture that value after cornering the market. As a means of distributing costs, this works for the public if, and only if, a steady stream of startups are willing to sacrifice themselves on the altar of public service provision. In terms of extending the principle of metered usage for limited resources, such goods and services offered to the public at cost should be provided in such a way that ensures equity and limits abuse.
For such resources, a "first-served-last-served" method of queue management is simple enough to work broadly: those who are served first go to the back of the queue for subsequent distributions. This method works well enough for any type of queue for recipients to be shared between enterprises if they share resources, allowing more equitable distribution across the board.
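A first-served-last-served queue can be sketched in a few lines with a rotating deque (an illustrative sketch; the `distribute` helper and names are invented):

```python
from collections import deque

# First-served-last-served: each distribution serves the front of the queue,
# then sends those recipients to the back, so nobody is served a second time
# before everyone has been served once.

def distribute(queue: deque, units: int) -> list:
    served = [queue[i] for i in range(min(units, len(queue)))]
    queue.rotate(-len(served))   # served recipients move to the back
    return served

q = deque(["ana", "bo", "cy", "di"])
assert distribute(q, 2) == ["ana", "bo"]
assert distribute(q, 2) == ["cy", "di"]
assert distribute(q, 2) == ["ana", "bo"]   # back around, equitably
```

Because the state is just an ordered list of recipients, the queue can be shared between enterprises pooling resources, as the text suggests.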
Full-Stack Proposals
The following are two proposals which are inspired by the spirit of SAEIT: creating value with civic engagement and open source.
Cooperative Bidding
As an extension of the idea of open source development inverting the outsourcing model, we propose a method of soliciting outside contractors and developers to bid on design and fulfillment in an open, integrated fashion which opens up economic opportunity, instead of enclosing it in a closed bid process. A cooperative bidding process for a SAEIT contract could use the iterative development approach to open source. We outline a rough procedure below:
- A clear scope for the project is crucial. Wherever it fits in your organization's implementation of the SAEIT stack, the requirements must be clearly set out before the process can begin. Additionally, phases of development must be clearly outlined (as necessitated by steps 3 and 4).
- Register the participants. Although project scopes may be publicly advertised, the bidding procedure need not be entirely public, and limiting proceedings to serious participants only should help reduce friction. Individuals as well as groups may register to cooperate, levelling the playing field with established contracting firms.
- Hackathon-style rounds for code contributions from participants develop the project in real-time, with each round having a further limited scope in the development process. The project will need a dedicated version control system capable of branching, merging, and forking.
- Participants can freely associate with others ad-hoc or in teams, as they open and contribute to branches during the coding rounds. The creator of the branch is the de facto administrator of that effort, and may allow others to contribute or merge branches. Forking must always be allowed from any branch, and in keeping with the ethos of open source, these forks may become separate software projects from the winning branch and may even move out-of-scope if desired.
- The soliciting enterprise calls an end to the round and vets the extant branches for scope compliance. Vetted branches pass into the next round, and steps 3-5 repeat until the soliciting enterprise declares they are satisfied with one branch.
- The contributors to the winning branch are recognized as an ad hoc contractor (if they are not already an existing firm) and awarded a contract to complete the work and/or support the project by terms set forth in the original scope.
Free Citywide Wi-Fi
New York City currently operates the largest municipal Internet subsidy program in the world, Big Apple Connect, which provides over 330,000 residents with low-cost internet access; participants may also qualify for additional state and Federal accessibility subsidies. The city also operates a network of LinkNYC kiosks which provide free Wi-Fi (financed entirely by advertising revenue), and a Link5G program that set up a network of communications towers on which the city charges cellular providers to park their equipment.
However, these programs are not integrated, and the history of Big Apple Connect shows how a more ambitious, SAEIT-oriented enterprise begun by the previous administration was abandoned for a contracting model that lessened the value of the public benefit for all.
The Internet Master Plan
New York City embarked on an ambitious Internet Master Plan (IMP) in January of 2020 under Mayor Bill de Blasio:
[A] bold, far-reaching vision for broadband infrastructure and service in New York City. It frames the challenges of achieving universal connectivity, clearly states the City's goals for the next generation of internet service, and outlines the actions the City will take to help all service providers contribute to those goals. It is both comprehensive in its view of the city and tailored to each neighborhood's unique conditions. The Master Plan presents public and private actors with the opportunity to address major, persistent gaps in infrastructure; deliver higher-performing connectivity for residents and businesses; and set a course for eliminating the digital divide in New York City.

Figure 2: The NYC Mesh network overlaid with NYCHA housing in violet
The goal was nothing less than universal broadband throughout the five boroughs by 2025, and the plan had advanced using a combination of networking technologies and utilizing "assets that are owned, operated, or otherwise controlled by the City, or available for City use." Private operators of communications infrastructure would "be able to respond with requests for assets from multiple City agencies. The City will prioritize approaches that enable multiple operators to share in the use of an asset."
Five stated principles guided the enterprise: equity, performance, affordability, privacy and choice. Internet Service Providers ("ISPs") were selected from both traditional cable providers and non-commercial mesh networks.
Mesh Networking
Mesh networks operate very differently from traditional cable providers and wired ISPs. They connect by line-of-sight or other wireless means, and each new installation increases the network's bandwidth rather than consuming it. Similarly, adding more mesh users increases system reliability, because each node forwards traffic to other nodes rather than sitting as a one-way recipient of data at the end of a traditional network.
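A toy model can illustrate why each node strengthens the whole. The graphs below are hypothetical (not NYC Mesh's actual topology or software): a hub-and-spoke subscriber is cut off the moment the head-end fails, while mesh peers simply route around a failed node:

```python
# A toy illustration (hypothetical graphs, not NYC Mesh's actual software) of
# why mesh topologies gain reliability with every node: traffic routes around
# a failed node, while hub-and-spoke subscribers lose everything with the hub.
from collections import deque

def reachable(adj, start, dead):
    """Return the set of nodes reachable from `start`, skipping failed node `dead`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt != dead and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hub-and-spoke: every subscriber hangs off a single head-end "hub".
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Mesh: every node also forwards traffic for its neighbors.
mesh = {"a": ["b", "c", "d"], "b": ["a", "c", "d"],
        "c": ["a", "b", "d"], "d": ["a", "b", "c"]}

assert reachable(star, "a", dead="hub") == {"a"}          # a is cut off entirely
assert reachable(mesh, "a", dead="c") == {"a", "b", "d"}  # traffic routes around c
```

The same breadth-first search shows the asymmetry: losing the hub partitions every spoke, while losing any single mesh node leaves the remaining peers connected.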

Figure 3: Free Public WiFi via LinkNYC and 5G
Big Apple Connect
When Mayor Eric Adams assumed office in January of 2022, the IMP was halted and in its place was proposed the "Big Apple Connect" program, which limited offerings to the two largest ISPs in the city and service to 220 of 335 New York City Housing Authority developments. Cable bills are reduced by up to 50% in this contract, which has been extended for another multiyear term. In removing mesh providers from the program, Big Apple Connect killed a major "force multiplier" in the rollout of citywide municipal broadband.
It was subsequently discovered that the program was being used to enable live video surveillance of NYCHA residents without their knowledge or permission. At the same time, Adams announced LibertyLink, connecting 2,200 Section 8 apartments with a new mesh network as part of a program which took $2.5M from HPD and the Public Libraries to deliver on what should have been in place had the IMP been carried out as designed.
Insourcing Wider and Deeper Broadband
First, all existing programs should be unified into a single municipal Internet network. All networked IT assets, and indeed all ISP contracts across the city, could be unified in a monopsonistic negotiation with ISPs operating at a higher level than the direct-to-consumer firms from which city enterprises purchase access.
Returning to the IMP's strategy of using city assets means opening up possibilities for co-locating sovereign cloud machines across existing networked properties. Unlocking this synergy can have many benefits:
- Keeping power and space rentals cheap;
- Distributing physical capacity, energy use, and redundancy across the city;
- Securing access and equipment at point-of-service, which keeps the whole area safer;
- Building excess capacity for future projects
Strategic Rollouts to Increase Equity and Effect Equal Access
By boosting the citywide network with mesh networks, we ensure a sustainable, expanding, and deepening benefit. The peculiarities of mesh networks provide unique opportunities for expanding the system organically. Line-of-sight networking (which can be affected by inclement weather, and is not as fast over a single link as a wired connection) transmits over the air using directional, typically parabolic, antennas: if you can see a mesh networking unit, you can connect to it, and multi-point connections make the network robust.
Manhattan and parts of the Bronx, Queens, and Brooklyn are well covered by LinkNYC, 5G networks, and line-of-sight access among the dense, tall buildings which populate these areas. In the outer boroughs, large swaths of low-lying housing have sightlines not only to Manhattan skyscrapers but also to nearby NYCHA developments, which tend to tower over neighboring housing.
Reversing the loss of public value with SAEIT
How can we apply the principles of SAEIT to rescue the promise of the Internet Master Plan from the missteps of Big Apple Connect and LibertyLink?
The IMP was well on its way toward delivering the kind of deep, sustainable public benefit SAEIT wants to construct. Cheap or free Internet access is a value-leader, which can help foster the further adoption of SAEIT products, user communities, economic opportunities and more.
The Big Apple Connect program used some measure of monopsonistic power to wring a 50% discount from cable providers, who made a permanent infrastructure investment by wiring NYCHA projects into their network. However, Federal regulations state that alternative ISPs must be able to use this infrastructure after BAC contracts expire in 2028, which provides a natural target date for full implementation and deployment of a universal citywide Wi-Fi enterprise.
By ensuring the project scopes and any software interfaces are open source and public, we can hope to prevent betrayals of public trust such as BAC's surreptitious surveillance, which damaged trust in all city-provided services, both within and without NYCHA.
Extending the IMP's strategy of using city assets, we suggest co-locating distributed data center machines across such properties, e.g., libraries, NYCHA houses, and other buildings already wired with high-speed connections. Rather than using a public benefit as a Trojan horse for passive surveillance, co-locating small-scale data centers in NYCHA housing could invert the current approach: the city could increase safety by devoting security resources to protecting the data center and keeping the Internet connection up, establishing a security presence focused on protecting city assets rather than spying on residents.
The "Neighborhood Tech Help" centers which are part of LibertyLink, along with hundreds of other technology training centers, could be reoriented along the SAEIT stack: distributing free open source software, installing SAEIT OSes on recycled or donated hardware, helping residents form hyperlocal user groups for peer support and education, and deepening the value of their offerings by using free software.
Big Apple Connect's cable services are extremely basic, so there is a real opportunity to gear the release of a SAEIT SET-TOP distro towards BAC users, allowing them to access Internet-based media streams as well as over-the-air digital signals like those of NYC.tv/WNYE, the municipal television network.
Integrating all of these resources, connecting mesh, cellular, and Wi-Fi equipment into a single citywide network, will make true citywide delivery of broadband and wireless a reality.
We suggest a split model for costs at point-of-service. Wi-Fi for individuals should be free, but lightly throttled to preserve network capacity. Permanent connections (e.g., BAC-style installations) should be offered at cost, with existing state and federal subsidies for Internet access applied automatically for those eligible.
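As one illustration of what "lightly throttled" could mean in practice, a token-bucket limiter caps sustained throughput while letting short bursts through unimpeded. The class below and the 10 Mbit/s figure are illustrative assumptions for this sketch, not a deployed configuration:

```python
# A token-bucket sketch of the "lightly throttled" free tier: sustained
# throughput is capped at rate_bps, but short bursts up to burst_bytes pass
# freely. The class and the ~10 Mbit/s figure are illustrative assumptions.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # sustained bytes per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, nbytes, now):
        """Refill tokens for the elapsed time, then admit the packet if possible."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # delay or drop: the user exceeded the free-tier rate

free_tier = TokenBucket(rate_bps=1_250_000, burst_bytes=5_000_000)  # ~10 Mbit/s
assert free_tier.allow(4_000_000, now=0.0)       # an initial burst passes
assert not free_tier.allow(4_000_000, now=0.1)   # sustained overuse is throttled
assert free_tier.allow(1_000_000, now=2.0)       # tokens refill after backing off
```

In production this shaping would live in the network gear (e.g., a router's queueing discipline) rather than application code; the point is that "lightly throttled" is a tunable rate-and-burst pair, not a hard cap on what free users can do.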
Conclusions
Since the neoliberal turn of the 1980s, public services have increasingly been outsourced to the private market. This careful and deliberate project of hollowing out state capacity has had far-reaching effects, not only in degrading the quality of those services, but in narrowing our understanding of government capacity itself. A paradigmatic shift toward capable public institutions needs a careful and deliberate approach as well.
The radical transparency of open sourcing all government IT projects is just such a paradigmatic shift. By rewriting the rules by which capacity is defined, a single emergent pattern can help protect civil liberties, lower costs, increase civic participation in a way that gives individuals marketable skills, and provide freely available, lasting public benefits whose value only deepens over time, as opposed to eroding.
A democratic development approach not only gives the public a direct and powerful voice in shaping these public services, but provides real value for participants at point-of-service.
The seven layers of the SAEIT stack comprise big changes in how public enterprises think about the actual costs of using, transmitting and protecting their information. None of these suggestions can happen overnight, and layer independence allows a piecemeal and sustainable pathway to transition. Every proposal meets unforeseen practical challenges. The emergence of common difficulties in applying SAEIT should be channeled back into an intra-organizational open source community, so that solutions are shared among all.
An engaged open source developer and user community allows your organization to deploy SAEIT products locally while developing and maintaining them at an intra-organizational scale. This not only inverts the outsourcing model, but the bureaucratic one as well. A civic user group should be a direct democracy, able to address public needs and concerns while allowing civic-minded discussion and contribution to directly offset bureaucratic time and money that would otherwise be spent on a more hierarchical contracting process.
Where DevOps urged individual organizations to break down internal silos, SAEIT asks public organizations everywhere to break down external silos as well. Public enterprise anywhere should be united in a common purpose: to supply and increase public benefits with available resources as efficiently as possible.
A commitment to public benefit means building a sustainable path towards maintaining, deepening, and broadening the provision of services to the public. We cannot hope to do this by renting infrastructure and capacity from private hands. Instead, we must increase opportunity by keeping infrastructure itself a public benefit, and locked into the needs of the public rather than any vendor.
Rebuilding public enterprises from the ground up isn't easy, but with the SAEIT stack, it will get easier and easier the more widely the approach is adopted. A network of SAEIT public organizations can multiply meagre civic budgets by the power of community and inter-organization engagement, while continuously delivering and improving public benefits. But first, we must begin with a single layer.
Glossary
- CapEx
- Business speak for "capital expenditure," or one-time investments.
- Conflict-free Replicated Data Type (CRDT)
- A data structure which allows concurrent edits of a document from disparate replicas to be merged and resolved automatically, without conflicts.
- Enterprise
- Any endeavor requiring effort, resources, and planning; within the scope of this paper, a sub-organizational unit deploying projects.
- GDPR
- The General Data Protection Regulation, a European Union privacy law which holds companies to a high standard for PII protection.
- Hackathon
- A short, intensive development event in which attention is pulled from day-to-day projects to focus on quick, high-value ("low-hanging fruit") improvements.
- Information Technology (IT)
- Equipment or systems which handle data, from hardware to software, cloud services to networking cables.
- Key Pair
- A matched set of public and private cryptographic keys (often associated with an email address, as in PGP). Anything encrypted with the public key can only be decrypted with the corresponding private key.
- Key-Value Pair
- A NoSQL data structure in which a unique key maps to an arbitrary value. Every JSON object, for example, is made up of key-value pairs.
- Kubernetes (K8s)
- An open source container orchestration system that allows containerized applications to scale up and down automatically.
- Local-first
- When applications are designed to run on the end-user's machine, potentially disconnected from the Internet.
- NoSQL Database
- A database made up of documents or columns, which may hold nested key-value pairs in an unfixed schema, as opposed to a SQL or relational database.
- Open Source
- A software licensing movement which requires that all source code be freely available, and allows anyone to modify the code and release it under the terms of its open source license.
- OpEx
- Business speak for "operating expenses," or ongoing necessary costs.
- Personally Identifiable Information (PII)
- Data which can be used to identify an individual; its exposure amounts to the public release of private, sensitive information.
- Relational Database (SQL)
- A long-used method of storing information in predefined columns and rows, with links between tables based on shared keys. These databases typically use the Structured Query Language (SQL).
- Software-as-a-Service (SaaS)
- A cloud deployment model where end-user software runs in a client-server fashion over the Internet, as opposed to local-first.
- Vendor Lock-in
- When a firm leverages its monopolistic power to make transitioning away from its products more difficult.
Links
- BeTTY
- For an in-depth technical discussion of a SAEIT-style data lake project, see the open source BeTTY protocol. BeTTY is a local-first, encrypted semantic network which fulfills the requirements of Layers 2 and 3 of the SAEIT stack.
- Memeograph
- A proposed flagship social networking app for BeTTY, Memeograph is an example of the kind of open source interface which can be built atop a SAEIT-style stack, capable of handling both sensitive and public data without needing to use Single Sign-On authentication.
- In Government
- Why Denmark is dumping Microsoft Office and Windows for LibreOffice and Linux (ZDNet)
- Yet another European government is ditching Microsoft for Linux - here's why (ZDNet)
- BOSS Linux
- BOSS Linux Official Site
- C-DAC Product Page
- OpenKylin
- Kylin OS (Wikipedia)
- Meanings and terminology
- What is DevOps? Meaning, methodology and guide (TechTarget)
- The Healthcare.gov Rollout
- Case Study | Why Healthcare.gov Failed: Lessons in Project Management (Yale)
- How Healthcare.gov’s botched rollout led to a digital services revolution in government (Federal News Network)
- Announcement
- NYC Internet Master Plan (De Blasio administration)
- NYC Kills Internet Master Plan for universal, public web access (Gothamist)
- Adams Quietly Uses Free Internet at NYCHA to Expand Police Surveillance (New York Focus)
- SERVER – Based on a well-established, non-commercial Linux distro
- DESKTOP – Designed for
- SET-TOP – Designed for kiosks, billboards, home TV sets, etc.
- CONTAINER – A slimmed-down SERVER OS for
- EDUCATIONAL – A "spin" of DESKTOP for students and teachers
- MOBILE – A "spin" of the SET-TOP distro designed for touch-screen use