In 2020, I submitted an affidavit. I withheld key names, redirected conversations in meetings, and embedded indicators in plain sight, disguised within the ordinary and overlooked by design. Every omission was intentional—not to obscure the truth but to protect it.

I observed that those working against our country—and those working for it—didn’t fully comprehend what they were inside of. Many were speculating blindly. Some were merely useful innocents. Others were compromised beyond repair. But I could see the architecture of infiltration, the foreign entanglements woven into domestic institutions, the cryptographic handshakes crossing oceans without notice. What stunned me wasn’t how deep it went—it was how few even wanted to see it. Not one mention of FHE (fully homomorphic encryption). Not one person invoked Rivest as an expert. Not one mention.

I remember how quickly those waving the “election fraud” banner proved clueless or compromised. They didn’t know how it worked—how the systems talked to each other, how trust was mathematically manufactured, how voting systems were only the surface layer of a much deeper infrastructure war. I asked myself: Are they pretending? Or are they really this blind? Either way, their outrage was a performance, their understanding shallow. My concern was not knowing whom the CCP had reached with its influence. Was it those who still pretend there is a conflict between China and Taiwan, and that Taiwan isn’t already part of the CCP’s empire, when that conflict is manufactured for distraction and profit? Or was it those who believe FHE is in the hands of the CCP, that the US lost the tech race, and who now work for the CCP under the guise of working for the United States of America?

Interestingly, the question I was asked most frequently? “What do you want?”

As if I were angling for a position, a paycheck, a seat at the table. Can’t someone simply want to rectify a wrong? I went there of my own accord, with no sponsors or interests backing me, other than to serve my country.

When I sensed someone was compromised—federal, military, private, it didn’t matter—I began tainting the well intentionally. I would mislead with partial statements or contradict myself outright. It was a defense mechanism. The reason was never me; it was what I was protecting: the uncorrupted body of information, the pattern I was constructing through fragmentation. When I saw TJ Schaffer parade a truck driver in front of cameras, laying claim to something that indeed happened, but without using readily available evidence (DOT MONITORS TRUCKER MOVEMENTS: the devices are literally strapped to the engine), I realized they were either incompetent or in on it. The ballot stuffing and the private warehouses the USPS rented in Ohio to park PA and MI ballots were an operation by the Agency. (Well played, CIA and Co.) At that point, my goal was not to convince anyone in the moment—it was to leave behind a map, invisible to most, but unmistakable to those who could piece it together. My affidavit has some of those pieces, all 100% verifiable and accurate.

In war, deception is everything. Both sides were lying and manipulating. The stage was set for spectacle, and the public got a circus of grandstanding, talking circuits pushing canvassing, and obfuscation of the ACTUAL remedy: TRUTH.

As the smoke rose, I watched people I once called friends and allies turn into predators and profiteers, pushing for access, angling for information, and trying to monetize what they suspected I had. When I refused to play along and made it clear I wasn’t here to fund someone’s political campaign, turn tricks for coin, or boost a grift, they turned—fast. The moment I stopped being useful, I became a threat.

But that’s fine. That’s the cost of holding the line.

Enter Stage Left: The Harvard Undergraduate Mathematics Association (HUMA) is a student-run club that hosts math competitions, social events, and talks for undergraduates who love numbers. But beneath this innocuous exterior lies a more profound reality: HUMA functions as a strategic pipeline, channeling some of the most intellectually gifted individuals in the United States into the heart of the national security apparatus.

HUMA draws in Harvard’s most talented and ambitious mathematical minds each year. These are not simply students solving problem sets—they are future architects of complex systems that will define the next era of geopolitical power. Many HUMA members go on to pursue graduate studies at institutions like MIT, Stanford, and Princeton, often under the auspices of Department of Defense-backed research grants and fellowships. Their work frequently intersects with topics of national significance: quantum cryptography, advanced algorithmic modeling, zero-knowledge proofs, artificial intelligence, cybernetics, and game-theoretic security frameworks.

Others are recruited into elite defense and intelligence organizations, including the National Security Agency (NSA), Defense Advanced Research Projects Agency (DARPA), MITRE Corporation, and In-Q-Tel, the CIA’s venture capital arm. These institutions seek precisely the raw, disciplined, and mathematically inclined thinkers that HUMA cultivates—individuals capable of understanding and solving abstract problems with real-world consequences.

Beyond the public sector, HUMA alumni often find themselves embedded within private defense contractors such as Palantir Technologies, Raytheon, and Booz Allen Hamilton, or within powerful policy and security think tanks like the RAND Corporation, the Center for Strategic and International Studies (CSIS), and the Council on Foreign Relations (CFR). In these roles, they help design and critique systems that manage global surveillance, cybersecurity policy, strategic deterrence modeling, and information warfare.

In today’s security landscape, national defense is no longer dominated by conventional arms alone. It’s shaped by those who can calculate, simulate, encrypt, and optimize. The modern battlefield is digital, predictive, and algorithmic—and mathematics is its lingua franca. Fields such as quantum computing, artificial intelligence, and decision science are now central to military dominance, cyber resilience, and safeguarding critical infrastructure. These domains are abstract, mathematical, and often opaque to the public—but not to the HUMA-trained minds who will one day operate within them.

You should follow HUMA – it’s not merely a college math club. Following HUMA will allow you to observe the early formation of future cryptographers, cyber warriors, intelligence analysts, and strategic theorists. These individuals will shape encryption protocols, influence AI governance frameworks, and develop the probabilistic models used to anticipate terrorist activity or assess the stability of nuclear deterrence.

As someone skilled enough to participate in HUMA-sponsored events, I realized that HUMA is not the power; it’s the signal that precedes the power. It is where future decision-makers quietly gather, far from the halls of Congress or the headlines of defense publications. To ignore its significance is to miss a key node in the broader system of influence, recruitment, and intellectual preparation that ultimately helps steer the direction of American national security. Simpsons anyone?

This is precisely why tracking intellectual ecosystems like HUMA is not merely academic but geopolitical. The minds incubated within such organizations don’t remain isolated in problem sets or chalkboard theorems; they are groomed for insertion into high-leverage roles across intelligence agencies, defense contractors, and emerging tech policy structures. And in that same breath, we must ask: where else are such minds being shaped, and to what end? If HUMA represents a quiet prelude to American power, then Tsinghua University is its mirror opposite—deliberately calibrated to serve China’s rise as a strategic and technological superpower.

The contrast is not just institutional; it is architectural. Whereas HUMA operates in a liberal academic environment that often underestimates its geopolitical significance, Tsinghua functions as a fully integrated extension of the Chinese Communist Party’s defense doctrine. One is the unaware signal; the other is the conscious instrument. Together, they form opposing poles of a transnational knowledge war—a subtle but escalating battle over who controls the future of information, automation, and ultimately, command.

Tsinghua University is not simply a top-tier Chinese research institution. It is the crown jewel of the CCP’s military-industrial complex (MIC)—the intellectual engine behind China’s strategic dominance in cyber warfare, artificial intelligence, surveillance infrastructure, and dual-use technologies. Its academic reputation in the West belies its deeper function: to serve as a soft-power conduit for the Chinese Communist Party’s national security and military ambitions under the umbrella of so-called “civilian” research.

For decades, elite American universities—Harvard included—have maintained exchange programs, joint research initiatives, and academic partnerships with Tsinghua. These programs often focus on sensitive and strategically consequential domains, including:

  • Quantum information science
  • Cryptographic algorithm development
  • Artificial intelligence and deep learning architectures
  • Blockchain infrastructure and zero-knowledge proof systems

These aren’t harmless academic curiosities. They are the building blocks of next-generation command-and-control systems, autonomous weapons platforms, data-driven population surveillance, and cyber-kinetic warfare capabilities.

When students affiliated with groups like the Harvard Undergraduate Mathematics Association (HUMA) participate in research partnerships, exchange programs, or co-authored academic publications with institutions like Tsinghua University, they are not just engaging in benign scholarly collaboration. They are—wittingly or not—participating in the construction of a high-value intellectual corridor between America’s elite mathematical and scientific talent and China’s most militarized academic entity, one that directly supports the strategic goals of the Chinese Communist Party (CCP).

This corridor is not hypothetical. It is real, and it is already active. Tore Maras

Consider, for example, the Tsinghua-Berkeley Shenzhen Institute (TBSI), a joint academic venture launched in 2014 between the University of California, Berkeley, and Tsinghua University. Though it was marketed as a platform for global academic innovation, internal U.S. government reports flagged the institute as a dual-use technology conduit through which Chinese military contractors could co-opt advanced research in robotics, precision manufacturing, and quantum communication. In 2020, following pressure from the U.S. Department of Education and scrutiny from the Department of Defense, UC Berkeley began to scale back its role in the venture.

Similarly, Harvard’s collaborations with Tsinghua in AI and cryptography have raised quiet but growing concerns in the intelligence community. In 2018, researchers at both institutions co-authored a paper on federated learning, a machine learning architecture essential for privacy-preserving AI applications. What went unnoticed by the general public, but not by those watching HUMA closely, was that Tsinghua’s Department of Precision Instruments and Mechanology—a known PLA-affiliated entity—was a contributing institution. In PLA doctrine, federated learning is a core component of military-intelligence fusion in cyber command systems.

Even informal exchanges matter. A 2019 investigation by the Australian Strategic Policy Institute (ASPI) documented that over 600 Chinese scientists affiliated with military institutions had passed through Western academic institutions under the guise of civilian research. These “visiting scholars” often collaborated on projects related to computational fluid dynamics, AI target recognition, and advanced materials science, then returned to China with technical expertise that was rapidly folded into PLA R&D programs. The institutional gray space between student clubs like HUMA and formal research labs is where such dual-use intellectual transfer can occur unnoticed.

What makes this leakage so insidious is its subtlety and legitimacy. No espionage is necessary. No classified systems need to be breached. A talented American undergraduate, eager to publish or participate in a prestigious exchange, may unknowingly contribute a novel cryptographic protocol, optimization technique, or network modeling framework—only to have that innovation embedded in the PLA’s next-generation battlefield systems or national surveillance grid.

The battlefield of the future begins in classrooms and coding labs, not launchpads. And collaborations that begin with good intentions often end with strategic asymmetries. Tore Maras

Academic culture in the West is rooted in openness, dialogue, and the free exchange of ideas. But when those ideas flow into authoritarian systems engineered for absolute control, they do not remain neutral. They become weaponized. The intellectual freedom of the West becomes the strategic asset of the East.

Student groups like HUMA must be seen not merely as local communities of scholarly interest but as potential inflection points in the broader competition for cognitive dominance. Monitoring these interactions, especially when they cross into adversarial research networks, is no longer an option. It is a national imperative.

When speaking of election theft, it is obvious the CIA was involved. I worked on the Afghanistan elections as a PMC (Global Security Group), and USAID funded most of it: campaigns being funded, votes being bought, results being altered. Ukraine was the most TRANSPARENT election theft in modern history, enabled by Executive Orders from Obama and fully supported by the Department of State via USAID. John Owen Brennan, who is NOT that smart, even blamed a Russian “hack” for stopping the counting of the votes in the middle of the night in Ukraine, and that narrative died as a failure days after its deployment, memory-holed in the abyss of the internet.

The C_A and China have stolen your elections for decades. And when I say stole yours, I speak to the WHOLE world.

To identify the true source of a systemic problem, particularly one obscured by deliberate deception, scale, and complexity, you must locate the critical nodes that persist when conventional mechanisms of understanding fail. Symptom analysis, data pattern recognition, and feedback loops, while useful under normal conditions, break down in environments shaped by the fog of war and adversaries’ strategic obfuscation. This is called getting rabbit-holed. These environments are not accidental—they are engineered to render truth indistinguishable from noise.

When nothing seems to correlate, signals contradict, and narratives fracture, you must search not for answers but for pressure points—the recurring nodes through which influence, control, or disruption consistently flow. These nodes rarely scream for attention. They operate in quiet continuity, surviving through changing symptoms and shifting data. They are not evident unless you observe across time, watching not for events but for patterns of concealment.

In these moments, intelligence shifts from data-driven to intuition-fortified pattern recognition, guided not by what is said but by what is consistently omitted, distorted, or redundantly explained. The system’s failure to correct itself becomes the clue. And where there is no correction, there is a node protecting itself—a protected center of gravity masking the origin of the anomaly.

How did no one mention RIVEST… on the “election fraud” conservative side or the left? Were they all pretending to be experts and grifting? Were they working for the CCP? Were they working for the Gatekeepers?

One critical node in that pattern is Professor Ronald L. Rivest. A celebrated co-creator of the RSA encryption algorithm and a longtime champion of secure voting, Rivest frames his public contributions in patriotic terms: increasing election integrity, minimizing fraud, and safeguarding voter trust. He was a foundational figure in the Caltech/MIT Voting Technology Project, which emerged after the 2000 presidential election fiasco. Under the well-intentioned banner of improving American voting systems, it was here that the Trojan Horse entered the gates.

Allow me to elaborate. Tsinghua University has engaged in research related to voting cryptography, particularly focusing on zero-knowledge proofs (ZKPs) and homomorphic encryption, both foundational technologies for secure electronic voting systems. Researchers at Tsinghua have developed systems like UniZK and PipeZK, which are designed to accelerate zero-knowledge proof computations. These systems apply to various domains, including electronic voting, where they can enhance privacy and verifiability. For instance, UniZK utilizes protocols such as Plonky2 and Starky, which are recognized for their efficiency in applications like blockchains and electronic voting. Voting system companies refuse to show their black-box mixing algorithms; now you know why.
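
To make that concrete, here is a minimal sketch of the kind of additively homomorphic tally these primitives enable, using textbook Paillier encryption with toy-sized primes. It illustrates the general technique the paragraph describes; it is not Tsinghua’s UniZK/PipeZK code and not any vendor’s production system, and the parameters and ballot encoding are my own simplifications.

```python
import random
from math import gcd

# Toy Paillier parameters (real deployments use primes of 1024+ bits).
p_, q_ = 1789, 1999
n = p_ * q_
n2 = n * n
lam = (p_ - 1) * (q_ - 1) // gcd(p_ - 1, q_ - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                               # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Each ballot encrypts 1 (candidate A) or 0 (candidate B).
ballots = [1, 0, 1, 1, 0, 1, 0, 1]
encrypted = [encrypt(b) for b in ballots]

# Additive homomorphism: multiplying ciphertexts adds the hidden votes,
# so the tally is computed without decrypting any individual ballot.
tally_ct = 1
for c in encrypted:
    tally_ct = (tally_ct * c) % n2

votes_a = decrypt(tally_ct)
print("A:", votes_a, "B:", len(ballots) - votes_a)   # -> A: 5 B: 3
```

Notice where the trust sits: whoever holds lam and mu can decrypt anything, and the public can only audit the tally if the software performing it is open to inspection.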

That is precisely the conceptual territory Dominion Voting Systems, among others, is referencing when explaining the “mixing phase” within their end-to-end verifiable election systems. While the companies may not always use homomorphic encryption explicitly in public-facing documentation, the underlying cryptographic mechanisms, particularly those tied to ballot shuffling, vote anonymization, and secure aggregation, are fundamentally based on principles drawn from homomorphic encryption and mixnets. These technologies are designed to ensure that the contents of each vote remain confidential while still allowing the overall tally to be verified and audited.

In practice, the mixing phase involves taking encrypted votes and passing them through a cryptographic process that reorders and re-randomizes them without decrypting the underlying data. A series of servers or trustees typically performs this operation, each executing a shuffle and re-encryption, ensuring that no single party can link an original encrypted ballot to its final decrypted result. This methodology is intended to preserve the secrecy of the vote while allowing public verification that the system operated correctly.
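
Here is a minimal sketch of that shuffle-and-re-randomize step, using textbook ElGamal over a toy prime group. It illustrates the general mixnet principle described above, not Dominion’s (or any other vendor’s) actual mixing code; the single trustee key and the group parameters are simplifications I chose for brevity.

```python
import random

p = 2 ** 127 - 1     # toy prime modulus; not production-grade parameters
g = 3                # group element used as the base

x = random.randrange(2, p - 1)   # trustees' decryption key (toy: a single key)
h = pow(g, x, p)                 # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def reencrypt(ct):
    c1, c2 = ct
    s = random.randrange(2, p - 1)   # fresh randomness: same plaintext, new ciphertext
    return ((c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p)

def decrypt(ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, x, p), -1, p)) % p

def mix(cts):
    shuffled = cts[:]
    random.shuffle(shuffled)                   # reorder the ciphertexts...
    return [reencrypt(ct) for ct in shuffled]  # ...and re-randomize each one

ballots = [101, 102, 101, 103, 101]            # plaintext ballots encoded as integers
encrypted = [encrypt(b) for b in ballots]

after_mix1 = mix(encrypted)    # first mix server
after_mix2 = mix(after_mix1)   # second, independent mix server

# The multiset of votes survives intact, but no observer of a single mix can
# link an output ciphertext back to the voter who produced the input.
print(sorted(decrypt(ct) for ct in after_mix2))   # -> [101, 101, 101, 102, 103]
```

In a real mixnet, each server would also publish a proof that it shuffled correctly; that proof layer is exactly what the next paragraphs describe.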

Like other vendors operating in jurisdictions that demand high transparency standards, Dominion relies on the credibility of cryptographic techniques to justify claims of election integrity. These operations often occur in parallel with other cryptographic constructs, such as zero-knowledge proofs or digital signatures, to further assert that the shuffle was performed correctly without introducing fraud.

Through its extensive research into homomorphic encryption and decentralized voting protocols, Tsinghua University demonstrates a clear institutional interest in these same methodologies. While specific implementation details may not always be published or available in Western channels, Tsinghua’s research track record—including Ethereum-based voting systems and zero-knowledge-based tallying schemes—indicates that it operates with a deep understanding of these cryptographic primitives. These are the very tools that underpin what Dominion and others describe as secure, anonymized vote processing.

When Dominion Voting, among others, references its “mixing phase,” it is gesturing toward a class of cryptographic operations that Tsinghua understands and has actively published research on. This convergence of theoretical and applied cryptography is not coincidental—it is a globally shared voting security architecture, vulnerable not in its math, but in its cross-border intellectual entanglements. And that, ultimately, is the vector worth watching.

But here is where it gets fascinating. Ronald L. Rivest co-developed Scantegrity, a voting system marketed as a breakthrough in end-to-end verifiability—a technical term designed to evoke trust. Built on optical scan ballots, Scantegrity introduced cryptographic confirmation codes, invisible ink, and audit capabilities, allowing voters to confirm that their selections were included in the final tally without revealing the contents of their votes. Its implementation in Takoma Park, Maryland, was hailed as a success, and the system remains a showcase of how mathematics can reinforce the democratic process.

But mathematics can just as easily be weaponized.

Optical scan ballots are often perceived as more secure than touchscreen voting due to their physical audit trail. However, this perceived integrity is only as strong as the chain of custody, the scanning software, and the post-processing cryptographic systems that interpret those ballots. When votes are fed into scanners, they are converted into machine-readable data that can be altered, substituted, or reinterpreted through the software layer. These alterations need not be visible to the voter or the poll worker. A compromised scanner, a manipulated database, or a skewed interpretation algorithm can all serve as invisible choke points where election outcomes are quietly redirected.

Scantegrity claimed to solve this with cryptographic auditability. Yet, it opened a new front: mathematics as both a validator and an obfuscator. The voter receives a confirmation code, yes—but that code must be verified through a public bulletin board, a complex cryptographic chain, and software trusted to run honest verification algorithms. It is here—in this cryptographic black box—where manipulation can be both concealed and defended under the guise of mathematical legitimacy.
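
For a concrete picture of what the voter is actually trusting, here is a toy version of that bulletin-board check: the authority publishes hash commitments to (ballot serial, confirmation code) pairs, and the voter recomputes one from their receipt. This is a deliberately simplified sketch of the general pattern, not the real Scantegrity protocol; the serials, codes, and nonce handling are hypothetical.

```python
import hashlib

def commit(serial: str, code: str, nonce: str) -> str:
    """Hash commitment to a (serial, confirmation code) pair."""
    return hashlib.sha256(f"{serial}|{code}|{nonce}".encode()).hexdigest()

# What the authority generates at ballot-printing time.
records = [("B-0001", "7XK2", "n1"), ("B-0002", "Q9TM", "n2"), ("B-0003", "3FJD", "n3")]

# What gets posted publicly: only the commitments.
bulletin_board = {serial: commit(serial, code, nonce) for serial, code, nonce in records}

def voter_check(serial: str, code: str, nonce: str) -> bool:
    """The voter recomputes the commitment from their receipt and compares."""
    return bulletin_board.get(serial) == commit(serial, code, nonce)

print(voter_check("B-0002", "Q9TM", "n2"))   # True  -> the receipt is on the board
print(voter_check("B-0002", "XXXX", "n2"))   # False -> the code was altered somewhere

# The check is only as good as the board and the software doing the checking:
# if either is controlled by a single party, "verified" is just that party's word.
```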

And this is where Rivest’s quiet, sustained engagement with Tsinghua University must be followed.

Research in Voting Systems – https://archive.is/efOxj

Tsinghua has been advancing cryptographic techniques nearly identical to those used in Scantegrity: zero-knowledge proofs, mixnets, homomorphic encryption, and distributed verifiability models. In particular, their research into self-tallying voting protocols on Ethereum replicates, in architecture if not in branding, the same intellectual structures that underpinned Scantegrity. The presence of federated learning models—another area of Tsinghua’s interest—in secure voting environments hints at a grander ambition: machine-driven, decentralized control over voting infrastructure, shielded from human oversight by cryptographic complexity.

The overlap is not theoretical. In 2018, Tsinghua’s researchers proposed a privacy-preserving e-voting system that combined the same zero-knowledge constructs Rivest helped popularize. The system employed bulletin board-style public verification—again, a structural echo of Scantegrity. Their language mimicked Rivest’s papers. Their protocols mirrored his innovations. And the environment they serve is one in which elections are not democratic exercises, but calibrated mechanisms of state control.

If Rivest brought cryptographic accountability to Takoma Park, Tsinghua brought cryptographic opacity to regime preservation. The unsettling reality is that both institutions use the same math; their political architectures differ.

This convergence cannot be dismissed as an academic coincidence. It represents a transnational diffusion of voting infrastructure logic—one that begins in Western laboratories and ends embedded in systems that do not believe in free elections, verifiability, or the people.

So yes, optical scan systems can be manipulated. But the actual manipulation is deeper. It’s in the algorithmic trust we outsource to cryptographers, and in the quiet, shared lineage of code and protocol that connects Takoma Park to Beijing—through the hands of Rivest, through the halls of Tsinghua, and into the encrypted machinery of modern election control.

This convergence of cryptographic control and institutional authority did not happen by accident—it was architected. The same hands that embedded trust into obscure verification codes in Takoma Park were simultaneously shaping the federal standards that would define the architecture of American elections for decades to come. Rivest was not operating in isolation; he was positioned at both ends of the pipeline—as the cryptographer who built the mathematical scaffolding of “secure voting” and as the policymaker who embedded those assumptions into national certification requirements. This dual role ensured that cryptographic trust became the default language of election legitimacy. And once that language was adopted at the federal level, it spread quietly and globally into research institutions like Tsinghua, where it was mirrored, absorbed, and redeployed under radically different ideological aims.

From 2004 to 2009, Ronald L. Rivest served on the Technical Guidelines Development Committee (TGDC), a little-known but extraordinarily influential advisory body to the U.S. Election Assistance Commission (EAC). This was not merely a consulting role but a strategic placement at the nerve center of election systems modernization, precisely when America’s voting infrastructure was being redefined in the post-HAVA (Help America Vote Act) era.

Rivest chaired the Computer Security and Transparency Subcommittee, placing him at the crossroads of cryptographic policy, software architecture, and federal certification standards. Under his leadership, the committee pushed forward a new orthodoxy: software independence—the principle that an undetected change or error in a voting system’s software must not be able to cause an undetectable change in the election outcome. On the surface, it was a noble and technically sound objective. But in execution, it seeded a complex, cryptographic dependency model that made elections increasingly opaque to laypersons and dangerously reliant on a narrow class of technologists.

During Rivest’s tenure, the TGDC spearheaded the drafting of the Voluntary Voting System Guidelines (VVSG) 1.0, which began to encode cryptographic mechanisms as core components of trust architecture in electronic voting systems. Simultaneously, the committee laid the groundwork for VVSG 2.0, which would go even further by emphasizing end-to-end verifiability, modular software designs, and non-proprietary interfaces—all of which sound democratic, but in practice shift power away from state election officials toward federally aligned cryptographic gatekeepers.

This was not reform. This was recalibration—a subtle, technical reshaping of the American voting system’s control structure. Rivest headed the subcommittee and codified its assumptions.

It’s also no coincidence that during this period, Rivest was simultaneously expanding his academic footprint into international cryptographic circles, including engagements with institutions like Tsinghua University, whose research mirrored and sometimes replicated the cryptographic assumptions embedded into U.S. voting protocols. Whether intentionally or not, the same man shaping America’s election software standards also influenced the academic environments through which adversarial states would build their versions of “trustworthy” voting systems, for very different ends.

The TGDC was not a legislative body, but its influence was—and remains—legally consequential. Vendors seeking federal certification were effectively required to comply with the framework it produced. In other words, it dictated the terms of access to the American ballot box.

And those terms were written in Rivest’s language.

In 2006, Ronald L. Rivest introduced the ThreeBallot voting system—a deceptively simple, paper-based protocol designed to simulate the security properties of cryptographic voting without relying on digital infrastructure. At first glance, it appeared to be a return to analog integrity: voters would fill out three separate ballots in a specific pattern, casting all three and retaining a copy of one as a receipt. This system allowed voters to confirm that their votes were counted without revealing how they voted, retaining both verifiability and secrecy. It was marketed as a bridge between the paper trail demanded by transparency advocates and the mathematical assurances required by modern cryptographers.
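
For readers unfamiliar with the scheme, here is a minimal sketch of the ThreeBallot marking and tallying rules just described. It is a toy illustration only; the published design adds ballot IDs, receipt copying, and audit procedures, and the two-candidate encoding here is my own.

```python
import random

CANDIDATES = ["A", "B"]

def mark_threeballot(choice: str):
    """Return three mini-ballots: the chosen candidate is marked on exactly two
    of the three, and every other candidate on exactly one."""
    sheets = [{c: 0 for c in CANDIDATES} for _ in range(3)]
    for cand in CANDIDATES:
        marks = 2 if cand == choice else 1
        for i in random.sample(range(3), marks):
            sheets[i][cand] = 1
    return sheets

voters = ["A", "A", "B", "A", "B"]                                # true intent of five voters
cast = [sheet for v in voters for sheet in mark_threeballot(v)]   # all three sheets go in the box

# Tally: every voter contributes exactly one "background" mark per candidate,
# so the real count is (total marks) minus (number of voters).
totals = {c: sum(sheet[c] for sheet in cast) - len(voters) for c in CANDIDATES}
print(totals)   # -> {'A': 3, 'B': 2}
```

Because any single sheet a voter takes home is consistent with every possible vote, the receipt proves presence in the tally without revealing the choice.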

But context is everything.

The ThreeBallot system emerged not in a vacuum, but in the direct wake of increasing federal pressure on states to transition to electronic voting infrastructure, especially under the mandates of the Help America Vote Act (HAVA) of 2002. The Department of Justice, acting through the Voting Section of the Civil Rights Division, began aggressively enforcing compliance with HAVA’s accessibility and modernization requirements. In 2006—the same year Rivest published ThreeBallot—the DOJ sued the State of New York in federal court for failing to implement electronic voting machines that met federal accessibility standards. The lawsuit forced New York to abandon its historic lever-based systems, which had been used for over a century, under threat of noncompliance with federal law.

The irony is palpable. While Rivest was publicly proposing a low-tech, paper-based verification method—an elegant hybrid model that could theoretically satisfy both voter confidence and cryptographic soundness—the federal machinery he had helped influence was simultaneously eradicating paper-dominant systems under the guise of modernization. The DOJ’s lawsuit against New York was not merely bureaucratic pressure; it was a legal instrument of structural transformation, compelling the state to adopt machines compliant with the very cryptographic assumptions embedded in Rivest’s other work—assumptions that were codified during his time on the Technical Guidelines Development Committee (TGDC).

In essence, Rivest played both sides of the reform narrative. On one hand, he offered the public an accessible, verifiable, “simplified” system like ThreeBallot. On the other, he contributed directly to the federal framework that made such systems politically and operationally obsolete—replaced by machine-verified, cryptographically controlled voting protocols that centralized trust away from the voter and toward opaque certification bodies.

ThreeBallot was never truly adopted, but it served its purpose: proof of concept, a public relations shield, a technical decoy. It demonstrated that secure voting could exist without black-box systems, even as the legal and institutional apparatus moved fully toward a future where all ballots are translated, stored, verified, and audited under cryptographic protocols few voters understand—and even fewer can verify independently.

This wasn’t reform. It was the managed migration of democratic trust—from the public domain to the cryptographic class. And Rivest was standing at the gateway, holding the keys. Tore Maras

At first glance, initiatives like ThreeBallot and the Voluntary Voting System Guidelines (VVSG) represent progress—technocratic reforms to modernize, secure, and streamline electoral processes. But what took place was far more consequential: a reassignment of trust.

Historically, trust in the electoral process was rooted in physical transparency. Paper ballots could be hand-counted, poll watchers could observe, and lever machines made audible clicks. Voters trusted the process not because it was flawless but because it was visible, tangible, and locally administered. Accountability rested with precinct workers, election boards, and those who could be seen and questioned. That was the public domain of democratic trust.

What replaced it was not a more transparent system—it was more abstract, governed by encryption protocols, hash functions, and complex verification schemes intelligible only to a small class of specialists. This “cryptographic class”—academics, federal advisors, private vendors, and system certifiers—now serves as the final authority on whether an election was “secure.” But most voters cannot understand, let alone verify, these systems. They must accept the word of experts, trusting not what they can see, but what they are told is mathematically sound.

This shift was not accidental. It was managed. Deliberate. Institutionalized.

Ronald Rivest stood precisely at the inflection point of that transition. As a world-renowned cryptographer, he had the mathematical authority. As chair of the TGDC’s Computer Security and Transparency Subcommittee, he had the regulatory access. And as the co-creator of both RSA encryption and end-to-end verifiable voting models, he had the intellectual capital to shape the narrative.

So when I say “he stood at the gateway, holding the keys,” it is not a metaphor but a statement of function. More than any other individual, Rivest bridged the old world of observable, civic-driven elections and the new world of cryptographically governed electoral legitimacy. He decided which mathematical assumptions would become legal standards. He wrote the protocols that replaced public observation with private verification. And once those gates were closed behind him, the average citizen became a passive participant in a process they could no longer independently verify.

That is not democracy upgraded. That is democracy redefined by a handful of mathematicians, with the public locked out of the equation.

Ronald Rivest helped design Scantegrity as a transparent, voter-verifiable system that uses cryptographic receipts and a public bulletin board to allow voters to confirm their votes were included in the final tally. On the surface, it’s the antidote to black-box voting: transparent, trackable, secure. But that’s the public-facing model.

Imagine you own that system, not by dismantling its cryptographic claims, but by controlling who can verify, how they verify, and what the system reports back.

Here’s how you would weaponize it:

First, remove the public bulletin board. In Scantegrity, the public ledger allows anyone to verify the inclusion of their encrypted receipt. In the weaponized model, that ledger still exists, but it is controlled by a private server owned by a vendor or contractor. Voters still receive cryptographic receipts, but they can only verify them through a third-party portal, which could selectively report results, throttle access, or present false confirmations dressed up in convincing formatting.

Second, you control the ballot printing and the code generation. Scantegrity depends on pre-generated confirmation codes tied to each candidate position. In a proprietary version, the mapping between confirmation codes and candidate choices becomes obscured, encrypted, or stored behind closed API endpoints. This makes it impossible for the public to audit how codes are assigned. The illusion of transparency remains, but the voter cannot validate that the code printed for Candidate A truly corresponds to Candidate A in the backend system.

Third, you isolate the cryptographic audit trail from independent observers. In the original Scantegrity design, the tallying and auditing mechanisms are mathematically transparent. But if you push this into a proprietary framework, the software that performs the tally, the hash chains, and the zero-knowledge proofs runs in a closed environment. You can inject false ballots, miscount encrypted receipts, or generate false proofs; no one outside the vendor can prove otherwise.

Finally, and most critically, you monetize the chain of trust. In a privatized Scantegrity-like system, election boards license the technology, rely on vendor-managed infrastructure, and depend on vendor-generated reports to validate outcomes. Once that dependency is normalized, the public no longer verifies the election—the vendor does. Just like that, an architecture originally built to restore voter control becomes a fortress of illusion, cloaked in mathematical legitimacy, and impenetrable to public scrutiny.
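
The structural point of those four steps fits in a few lines: verification against an open, mirrored board is something anyone can recompute, while a closed portal can simply assert success. This is a hypothetical sketch of the two trust models, not code from, or an accusation against, any particular vendor.

```python
import hashlib

# Open model: the board's commitments are public, so anyone can recompute a check.
open_board = {"B-0001": hashlib.sha256(b"B-0001|7XK2").hexdigest()}

def open_board_verify(serial: str, code: str) -> bool:
    digest = hashlib.sha256(f"{serial}|{code}".encode()).hexdigest()
    return open_board.get(serial) == digest

# Closed model: the portal's answer cannot be distinguished from a real check,
# because nothing it consults is visible to the voter.
def vendor_portal_verify(serial: str, code: str) -> bool:
    return True   # "confirmed" -- regardless of what was actually counted

print(open_board_verify("B-0001", "7XK2"), open_board_verify("B-0001", "ZZZZ"))  # True False
print(vendor_portal_verify("B-0001", "ZZZZ"))                                    # True
```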

Scantegrity was American-born, designed to defend transparency, empower the voter, and decentralize trust. But its core architecture, its logic of cryptographic verification, now lives in the academic laboratories of Tsinghua University. There, it has been abstracted, adapted, and reassembled—not to serve democratic ends, but to construct infrastructure for controlled participation, state-managed legitimacy, and mathematically concealed authoritarianism.

This isn’t theoretical. Tsinghua’s research into blockchain-based voting systems, self-tallying protocols, and zero-knowledge proof mechanisms maps almost one-to-one with the underlying principles Rivest helped popularize through Scantegrity and beyond. What began as a voter-facing, trust-distributing system has evolved, on the other side of the geopolitical spectrum, into a centralized, cryptographically sealed apparatus capable of simulating transparency while ensuring control remains out of reach.

And here’s the connective tissue no one wants to talk about: Rivest’s affiliations with Harvard, particularly his proximity to high-performing math students within the Harvard Undergraduate Mathematics Association (HUMA), and HUMA’s academic and research exchanges with Tsinghua University, placed him at the intersection of the knowledge transfer pipeline. The effect is the same whether overt or unspoken, collaborative or merely parallel: an intellectual corridor was created, linking America’s most trusted cryptographer with China’s most militarized academic institution.

HUMA wasn’t just a math club. It was a recruitment node, a feeder system, and a signaling structure through which ideas flowed across borders and institutions and eventually into codebases and protocols now optimized not for voter empowerment but for system control.

Ultimately, the tragedy is not that Scantegrity failed to be adopted widely in America. The tragedy is that its DNA now powers a new generation of voting infrastructure on foreign soil—one that speaks the language of cryptographic trust while serving the logic of political containment.

Rivest was the conduit. HUMA was the corridor. Tsinghua became the laboratory.

The world may soon realize that American cryptography’s most potent export wasn’t security—it was the blueprint for invisible control.

The Harvard Undergraduate Mathematics Association (HUMA) functions as an entry point for foreign influence operations or as a vector for ideological programming subtly aligned with adversarial frameworks. To say so is not to indulge in paranoia; it is to acknowledge the most vulnerable and overlooked terrain of the modern battlefield. In these intellectual staging grounds, soft power, recruitment, and strategic alignment occur long before any official doctrine is written.

What is historically consistent is that groups like HUMA could serve as unwitting nodes in a larger structure of influence and subversion. Elite student organizations are often cloaked in innocence, their purpose ostensibly academic, their events seemingly benign. Yet, they attract and cultivate the brightest minds destined for positions of extraordinary power—minds that are shaped, watched, and often co-opted before they know what they’ve become part of. The Department of Justice has documented cases in which student groups at Ivy League institutions were directly linked to foreign-sponsored surveillance operations, technological espionage, and ideological grooming programs disguised as academic exchange. These are not exceptions. They are the quiet rule of modern asymmetric warfare.

HUMA’s relevance to national security does not arise from what it declares itself to be. It arises from what it is positioned to become. It serves as a signal amplifier, highlighting future cryptographers, AI architects, and data theorists who will write the algorithms that govern our systems and our perception of truth. Because HUMA has contact with institutions like Tsinghua University—the principal research arm of the Communist Party’s military-industrial complex—it forms part of a transnational corridor of knowledge transfer. A corridor that, wittingly or not, exports American mathematical innovation and imports ideological frameworks wrapped in research partnerships and academic neutrality.

Tsinghua is not an academic partner; it is the CCP’s forward-deployed research front, polished, prestigious, and profoundly entangled in state-led dominance strategies.

The deeper risk is not in overt betrayal. It is in the gradual capture of cultural and cognitive sovereignty. If HUMA is a place where the language of mathematics is subtly bent to serve foreign narratives, where students are introduced to normalized relationships with adversarial institutions, or where they are tapped to work for foreign adversaries even as they are recruited by our own agencies and by companies like Palantir, then it is no longer simply an academic group. It is a soft instrument of strategic convergence, where future leaders are reshaped to see cooperation where conflict exists and partnership where penetration is underway. After all, the National Security Agency (NSA), the Defense Advanced Research Projects Agency (DARPA), the MITRE Corporation, In-Q-Tel (the CIA’s venture capital arm), and the CIA itself hire directly through HUMA, as do private defense contractors like Booz Allen Hamilton, Palantir Technologies, and Raytheon, and even THINK TANKS like CFR, Heritage, CSIS, and many more. Future cryptographers, cyber warriors, intelligence analysts, and strategic theorists all follow HUMA.


Tsinghua’s Institute for Interdisciplinary Information Sciences (IIIS) has hosted lectures and seminars on voting system security, featuring prominent experts like Professor Ronald L. Rivest from MIT. These events have addressed the complexities and challenges of designing secure electronic voting systems, including discussions on cryptographic approaches to ensure voter privacy and system integrity.

But it’s not only about elections. It’s about privacy, banking, medical records, and your DNA. You must protect your DNA data.

WHO CONTROLS THE STAGE? FHE HOLDER

This paper tells you who.

Yi, X., Paulet, R., & Bertino, E. (2014). Homomorphic Encryption and Applications. Springer Briefs in Computer Science. Springer International Publishing.

https://doi.org/10.1007/978-3-319-12229-8

See especially p. 47: “The concept of FHE was introduced by Rivest under the name privacy homomorphisms. The problem of constructing a scheme with these properties remained unsolved until 2009, when Gentry presented his breakthrough result.”

The conceptual foundation of what we now call Fully Homomorphic Encryption (FHE) began not in the cloud-era arms race, but with Ronald Rivest, decades earlier. Rivest, alongside Adleman and Dertouzos, introduced the idea under the term “privacy homomorphisms” back in 1978—a radical notion then: that one could perform computations on encrypted data without ever needing to decrypt it. The implications were clear even then. This would allow for secure, outsourced computation—the cloud before the cloud. Surveillance-proof computation before mass surveillance even had a formal name.
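
The 1978 idea is easy to glimpse in textbook RSA itself, which Rivest also co-created: RSA is multiplicatively homomorphic, so multiplying two ciphertexts multiplies the hidden plaintexts, no decryption required. A toy demonstration with illustrative parameters of my choosing:

```python
# Textbook (unpadded) RSA with toy primes -- for illustration only.
p, q, e = 1009, 2003, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 123, 321
c_prod = (enc(a) * enc(b)) % n          # computed without ever seeing a or b
print(dec(c_prod), a * b)               # -> 39483 39483

# Fully homomorphic encryption extends this from a single operation (here,
# multiplication) to arbitrary additions and multiplications -- the gap that
# stayed open from 1978 until Gentry's 2009 construction.
```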

Like all ideas ahead of their time, it remained largely theoretical. Despite its elegance, no known construction could realize the vision Rivest set forth, not in a practical or fully functional way. For over thirty years, the problem remained unsolved. It wasn’t until 2009 that Craig Gentry, then at Stanford, unveiled a working prototype. His solution—complex, lattice-based, and layered with bootstrapping techniques—cracked the code and shifted FHE from academic speculation to real-world implementation. What had been Rivest’s challenge became Gentry’s proof of concept.

But make no mistake: the DNA of that breakthrough belongs to Rivest. He framed the question. He set the parameters for what future generations would need to solve. And here’s where the strategic angle must be appreciated: whoever owns FHE owns the future of secure computation. It is the cryptographic holy grail—not just for protecting medical records or banking data, but for securing the logic of AI inference, cloud operations, decentralized computation, and battlefield information systems. It allows a party to delegate computation to an untrusted entity without giving away the data.

Rivest didn’t just imagine FHE—he weaponized a question that would remain dormant until the right moment. And when that moment came, it wasn’t the U.S. government or a defense lab that seized it first. A lone mathematician gave birth to a prototype, and a global race followed. Tsinghua University, Baidu, and other CCP-aligned cryptographic labs are trying to industrialize it. Because once homomorphic encryption becomes scalable, the world’s data becomes invisible and manipulable, and only those who control the encryption layers can see through the fog like an Ophanim that sees everything. This is a new structure of power. Rivest drew the map; in other words, he anticipated a world in which trust, privacy, and control would all converge inside the lattice of encryption, and he understood that whoever could navigate that map first would rule the terrain it defined.

Not a war, not an election, not a scandal. A theory. One proposed by a young American mathematician with a visionary’s clarity and a codebreaker’s obsession: that you could perform meaningful computation on encrypted data, without ever decrypting it. It was elegant, impossible, and terrifying in its implications.

Washington didn’t understand what he’d done—not at first. They saw a brilliant algorithm, a tool for bankers, maybe spies. But in Langley, in a sub-basement not listed on any floor plan, someone else understood. The division was unofficial, its budget carved from ghost accounts.

Post-Cold War, the CIA faced a dilemma. It was no longer a question of who had more guns but who controlled the math. China was catching up—not in steel but in theory. Within China’s Tsinghua University, an ideological mirror of MIT was formed. It produced not just students; it manufactured control systems and logic gates for civilizational dominance: surveillance, prediction, decision—the trifecta of algorithmic governance.

Langley didn’t develop the future. They licensed it. Co-opted it. Redirected it. And that’s precisely what they did with Rivest’s work.

At first, it was discreet joint symposiums. Academic bridges. “Research dialogue.” But a quiet realignment was forming beneath the white papers and grant abstracts—a new doctrine. If America couldn’t stop the rise of Chinese cryptography, perhaps it could guide it, embed itself inside it, and control the convergence from within. This would not be containment—it would be co-authorship. Of encryption. Of AI. Of the systems that would govern identity, sovereignty, and reality itself. So the CIA made a choice. They didn’t protect Rivest’s map. They sold copies of it. Quietly, through shell programs and “exchange scholars,” they opened the floodgates. And the gatekeepers? They became midwives of the new digital order—one not bound by nation, but by algorithmic alignment.

Pseudo-academic programs. Joint math symposia. Internships through DARPA-funded shell projects. Scantegrity’s architecture is mirrored in Shanghai. Tsinghua students are trained in American models, then disappear into defense-adjacent roles. A government-licensed election management system software update triggers a checksum failure in one swing state. It’s dismissed publicly. But inside Fort —–, analysts discover the cryptographic signature traces back to a modification proposed in a 2006 MIT thesis. That thesis? Co-authored by an early HUMA graduate now living in Shenzhen. Scantegrity’s prototype and a Tsinghua proof-of-concept blockchain voting system are embedded in a hidden cryptographic module. This module acts as a kill-switch: a state-controlled re-keying mechanism that can reroute voter receipts invisibly.

This constructive asymmetry allowed Chinese access to select pieces of American infrastructure, based on the belief that the U.S. would always retain the upper hand in cryptographic capacity, AI dominance, and digital governance models. The CCP, however, pursued a different strategy, Military-Civil Fusion, and weaponized openness as a vector. Instead of being infiltrated, they absorbed, mirrored, and then closed the gates. They took the cryptographic primitives, the AI architectures, the voting models, and rebuilt them inside state-run systems, turning tools of decentralization into tools of central control.

The CIA didn’t infiltrate China; China infiltrated the West through code, trust, and the mirror.

The psychological and philosophical beliefs of elite mathematicians shape the architecture of national policy down the road—especially in digital and military tech. Tore Maras

China has already embedded itself across the global digital and physical infrastructure grid, not through invasion, but through strategic enticement, offering free or low-cost technology wrapped in international development aid and soft diplomacy. Programs administered under UN umbrellas, such as the FAO and others, have served as effective distribution channels for Chinese-made surveillance hardware, data collection systems, and communication protocols. These were not humanitarian gestures but sovereignty capture operations disguised as technological generosity.

The question, then, is not whether this was allowed—it’s who allowed it and why. During the Obama Administration, key decisions signaled either catastrophic naiveté or calculated risk. Either they believed the Chinese Communist Party lacked the velocity to overtake Western infrastructure, or worse, they gambled that by “engaging,” they could steer or infiltrate Chinese tech development from the inside. Both assumptions were fatally flawed. The CCP did not need to innovate beyond us. They only needed to absorb, adapt, and then close the loop, which they have done with brutal efficiency.

On one hand, the United States allowed, and even facilitated, the diffusion of Fully Homomorphic Encryption (FHE) and its surrounding cryptographic primitives into Chinese institutions like Tsinghua. Whether through naïve academic openness, CIA miscalculation, or deliberate “constructive asymmetry,” the flow of knowledge is tangible and measurable. China’s rapid development of lattice-based FHE, zero-knowledge systems, and blockchain-enabled voting protocols—with clear architectural parallels to American-born systems like Scantegrity—suggests they have not only captured the technology but moved toward operational deployment within military, surveillance, and civic systems.

On the other hand, the United States never truly let go of control. The “glitch in 2006” could represent more than a software anomaly; it could be the first activation of a deeper insertion, a cryptographic watermark, a timestamped kernel of visibility placed inside China’s adoption arc. From that moment forward, the Infrastructure Group inside the CIA—or some ultra-classified parallel structure—could have embedded itself in the shadow of China’s cryptographic ascent, watching, logging, and preparing. Not to stop it—but to let it reach maturity, so that when the time comes, they hold the keystone: a kill switch, a silent override, a logic trap buried deep in the math itself.

This makes FHE different from any prior technology: it is not a weapon you point at an enemy but a weapon you install. Once embedded into national infrastructure—voting systems, encrypted cloud environments, biometric networks—it cannot be removed without dismantling the entire system. If the U.S. anticipated China’s absorption of FHE, it may have deliberately allowed it, but only after seeding it with strategic dependency: backdoors that aren’t obvious, but mathematical. Subtle flaws in bootstrapping steps. Controlled noise parameters. Predictable key generation schemes.
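
To see why “controlled noise parameters” and “predictable key generation” would be so devastating, here is a toy LWE-style example in which the noise is drawn from a PRNG whose seed only the designer keeps: anyone holding that seed can strip the noise and recover the secret with ordinary linear algebra. This is a deliberately weakened illustration of the general principle, not a claim about any real FHE implementation or any actual planted flaw; the seed, dimensions, and modulus are hypothetical.

```python
import random

q = 97                 # toy modulus
n_dim = 4              # secret dimension
m = 7                  # number of published samples
NOISE_SEED = 1337      # the hypothetical "controlled" parameter

def lwe_publish(secret, noise_rng, sample_rng):
    """Publish LWE-style samples (a, b) with b = <a, secret> + e (mod q)."""
    samples = []
    for _ in range(m):
        a = [sample_rng.randrange(q) for _ in range(n_dim)]
        e = noise_rng.randrange(-2, 3)            # small noise in [-2, 2]
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
        samples.append((a, b))
    return samples

def solve_mod_q(A, b):
    """Row-reduce A*x = b over GF(q); assumes the random rows have full rank."""
    rows = [r[:] + [v] for r, v in zip(A, b)]
    pivot_row, where = 0, [-1] * n_dim
    for col in range(n_dim):
        piv = next((i for i in range(pivot_row, len(rows)) if rows[i][col] % q), None)
        if piv is None:
            continue
        rows[pivot_row], rows[piv] = rows[piv], rows[pivot_row]
        inv = pow(rows[pivot_row][col], -1, q)
        rows[pivot_row] = [(x * inv) % q for x in rows[pivot_row]]
        for i in range(len(rows)):
            if i != pivot_row and rows[i][col] % q:
                f = rows[i][col]
                rows[i] = [(x - f * y) % q for x, y in zip(rows[i], rows[pivot_row])]
        where[col] = pivot_row
        pivot_row += 1
    return [rows[where[c]][-1] for c in range(n_dim)]

secret = [random.randrange(q) for _ in range(n_dim)]
published = lwe_publish(secret, random.Random(NOISE_SEED), random.Random())

# Whoever knows NOISE_SEED regenerates the exact noise sequence, removes it,
# and is left with a plain linear system in the secret.
replay = random.Random(NOISE_SEED)
A = [a for a, _ in published]
b_clean = [(b - replay.randrange(-2, 3)) % q for _, b in published]

print("secret   :", secret)
print("recovered:", solve_mod_q(A, b_clean))   # matches the secret
```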

THAT IS THE REAL WAR BEING FOUGHT TODAY.

Afterthoughts

Many may be asking, so what is happening? The question isn’t just whether the U.S. lost control. The question is whether they pretended to, knowing full well that when the geopolitical threshold was crossed, when China tried to flip the global order using America’s tools, they would allegedly activate something no one saw coming.

Not cyberwarfare. Not an EMP. But a global cryptographic nullification event. A sovereign reset. Tore Maras

IF there were ever a memo circulated within the CIA years ago, it would go something like this:

“…Although public sentiment and interagency concern regarding CCP-linked infrastructure proliferation have intensified, the strategic objective remains unchanged: permit full architectural adoption of seeded FHE primitives across targeted sectors within the PRC. The assumption that Beijing has “captured” or reverse-engineered sovereign-grade American cryptographic design is both accurate and irrelevant.

The code they are using is ours. The logic they trust is ours. The modular arithmetic, the noise functions, the bootstrapping thresholds—every element they believe they’ve mastered is wrapped in prepositioned failure conditions. Should strategic necessity arise, SIGIL Phase V provides a quantized kill switch protocol embedded at the compiler level, undetectable by standard audit methods and triggered via distributed elliptic key injection across spoofed transaction blocks.

If the mechanism is triggered, China will experience systemic cryptographic collapse—total loss of trust in digital authentication, vote irreversibility, smart contract execution, AI inference integrity, and financial instrument stability. The resulting state will mirror a cognitive blackout as control shifts from centralized command back to a vacuum.

The CCP’s perceived gains are tolerated because they are strategically fatal. No counter-narrative is required at this time. Deny all connections. Public disclosure would damage forward-operating credibility. Trust the math. The betrayal they think they’ve executed is the one we authored from the beginning. End of Note…”

At this point, two realities are both possible.

The Chinese can build the infrastructure but never own it, because FHE is in the hands of the Free World under the guise of the CIA.

OR

FHE is now in the hands of the CCP; they have captured everything, and your politicians are not acting; they are betraying you.

Which reality do you choose?

If you like my work, you can tip or support me via TIP ME or subscribe to me on Subscribestar! You can also follow and subscribe to me on Rumble and Locals, or subscribe to my Substack or on X. I am 100% people-funded. ww.toresays.com

Digital Dominion Series now on Amazon VOLUME I, VOLUME II and VOLUME III.
