Project Axiomatic - A Universal Computational Paradigm Integrating Surreal Numbers, Prime Factorization, and Generative Neural Models
Author
Richard Goodman
Abstract
The expressive power of modern artificial intelligence is often constrained by its reliance on conventional numerical representations, such as floating-point numbers. These representations, while efficient, obscure the deeper, axiomatic structures of mathematics, limiting a model's capacity for abstract reasoning. This paper introduces Project Axiomatic, a research initiative dedicated to developing a universal computational paradigm that transcends these limitations. Our approach is a synthesis of three distinct but complementary mathematical frameworks: the recursively constructed universe of Surreal Numbers, the unique multiplicative anatomy of Prime Factorization, and the verified axioms of custom-built Analytic Fields. We demonstrate the construction of surreal numbers and the verification of Euler's formula within a custom ComplexSurreal class, establishing the soundness of our approach. We then introduce a novel methodology for representing numbers as vectors within a prime exponent latent space and present a neural architecture designed to learn the highly non-linear rules of arithmetic within this space. Finally, we outline the future trajectory of this research, including the development of adaptive "Rational Function Neural Networks" and generative models built on Graph Neural Networks that aim to learn the axiomatic rules of the surreal number construction itself. This work charts a course toward a new class of AI models capable of a deeper, more structural understanding of mathematics, with profound implications for analyzing complex systems like financial markets, as well as for the fields of cryptography and fundamental science.
GitHub Repo: https://github.com/Apoth3osis-ai/project_axiomatic
1. Introduction
The remarkable success of deep learning is largely a story of statistical pattern recognition at a massive scale. However, the path toward artificial general intelligence requires a transition from mere pattern matching to genuine abstract reasoning. A fundamental bottleneck on this path is the very language of modern computation: the floating-point number. While a triumph of engineering, the standard float is a finite approximation that discards the rich, axiomatic, and often infinite nature of the mathematical objects it represents. An AI that only "sees" numbers as decimals on a line is blind to the combinatorial, multiplicative, and analytical structures that underpin them.
This paper presents a novel paradigm developed at Apoth3osis to address this foundational challenge. We seek to build computational and machine learning systems upon a richer and more complete mathematical substrate. Our paradigm is a deliberate synthesis of three powerful concepts:
The Surreal Number System: A recursively defined, universally encompassing ordered field that provides a combinatorial "fingerprint" for every number, connecting it to a vast, graph-like definitional hierarchy.
Prime Factorization: The unique decomposition of numbers into their multiplicative atoms, offering a foundational vector space representation that transforms multiplication and division into linear operations.
Custom Analytic Fields: The construction of bespoke computational classes from first principles, verified against fundamental axioms like Euler's formula, ensuring robustness and mathematical purity.
By integrating these perspectives, we aim to create AI models that operate not on opaque approximations, but on representations that are structurally and axiomatically rich. This paper details our initial explorations in this domain. We will demonstrate the construction and verification of our foundational classes, introduce a neural architecture trained to learn arithmetic in a prime exponent latent space, and discuss the profound applications and future work stemming from this research, including the modeling of complex financial systems and the development of generative models for mathematics itself.
2. The Surreal Number System as a Foundational Framework
The surreal numbers, discovered by John H. Conway, are generated by a single, recursive construction rule that is both simple and extraordinarily powerful. Every surreal number, x, is defined as a pair of sets of previously created surreal numbers, a Left set (L) and a Right set (R), written as x = { L | R }. The only constraint is that no member of L is greater than or equal to any member of R.
This process begins on "Day 0" with the creation of 0 = { | } from the empty set. On "Day 1," we can form 1 = { 0 | } and -1 = { | 0 }. On "Day 2," numbers like 2 = { 1 | } and 1/2 = { 0 | 1 } are born. This day-by-day construction process, when continued transfinitely, generates a dizzying universe of numbers that includes not only all real numbers but also a rich hierarchy of infinities and infinitesimals.
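To make the construction rule concrete, the following minimal Python sketch (illustrative, not the repository's implementation) encodes a surreal number as a pair of tuples of earlier numbers and applies Conway's recursive comparison rule; the numbers born on Days 0 through 2 then order themselves as expected.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class S:
    """A surreal number: a pair of tuples of previously created numbers."""
    L: tuple = ()
    R: tuple = ()

def leq(x: S, y: S) -> bool:
    """Conway's rule: x <= y iff no xl in L(x) has y <= xl,
    and no yr in R(y) has yr <= x."""
    return (not any(leq(y, xl) for xl in x.L) and
            not any(leq(yr, x) for yr in y.R))

zero = S()                       # Day 0:   0 = { | }
one  = S(L=(zero,))              # Day 1:   1 = { 0 | }
mone = S(R=(zero,))              # Day 1:  -1 = { | 0 }
half = S(L=(zero,), R=(one,))    # Day 2: 1/2 = { 0 | 1 }

assert leq(zero, one) and not leq(one, zero)    # 0 < 1
assert leq(mone, half) and leq(half, one)       # -1 <= 1/2 <= 1
```

The recursion terminates because every surreal number is built only from numbers created on earlier days, so the definitional hierarchy is well-founded.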
For our purposes, the surreal number system offers three critical advantages:
Universal Representation: It is the largest possible ordered field, providing a single, unified framework for all the numbers we might need to model.
Combinatorial Structure: Every number is defined by its relationship to "simpler" numbers. This creates a well-founded, directed acyclic graph (DAG) structure, where every number is a node with edges pointing to its definitional ancestors. This structure is an ideal substrate for Graph Neural Networks (GNNs).
Infinitesimal Calculus: The native presence of infinitesimals allows for a more rigorous and potentially more stable foundation for calculus, moving beyond the limit-based definitions of standard analysis.
Our implementation provides a module to construct a finite surreal representation for any given rational number, demonstrating how integers, dyadic rationals (m/2^n), and approximations of other rationals can be visualized as nodes within this cosmic graph.
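As an illustration of how such a module can proceed (a sketch assuming the standard bisection-style construction, not the repository's exact API), the routine below walks day by day from 0 toward a target rational, halving the gap between the current Left and Right bounds. Dyadic rationals terminate exactly; other rationals yield successive approximations.

```python
from fractions import Fraction

def surreal_path(q: Fraction, max_days: int = 32):
    """Walk day by day from 0 toward q, bisecting between the current
    Left and Right bounds. Dyadic rationals (m / 2^n) terminate exactly;
    other rationals yield a truncated approximation."""
    lo = hi = None          # current Left / Right option (None = empty set)
    x = Fraction(0)         # Day 0:  0 = { | }
    path = [x]
    for _ in range(max_days):
        if x == q:
            return path
        if q > x:
            lo = x
            x = lo + 1 if hi is None else (lo + hi) / 2
        else:
            hi = x
            x = hi - 1 if lo is None else (lo + hi) / 2
        path.append(x)
    return path

print(surreal_path(Fraction(5, 8)))   # path of values: 0, 1, 1/2, 3/4, 5/8
```

Each value in the returned path is a node in the definitional graph, with edges to the bounds that defined it.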
3. Verifiable Analytic Approximations in Custom Fields
To leverage these advanced mathematical structures, we cannot rely on the inherent imprecision of standard floating-point arithmetic. It is essential to build custom computational classes from first principles, ensuring they adhere strictly to mathematical axioms.
As a demonstration and validation of this methodology, we implemented a Surreal class (using Python's Fraction for perfect rational arithmetic) and a ComplexSurreal class built upon it. To test the robustness and coherence of this custom field, we tasked it with verifying one of mathematics' most elegant identities: Euler's formula.
$$e^{i\theta} = \cos(\theta) + i\,\sin(\theta)$$
This identity is the bedrock of modern signal analysis, connecting algebraic exponentiation with geometric rotation. To compute transcendental functions like exp, sin, and cos within our custom class, we implemented their Taylor series expansions. For example:
$$\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$$
Since we cannot compute an infinite sum, our seriessum function truncates the calculation once the magnitude of the terms falls below a specified precision threshold, epsilon (ε). Our implementation successfully verified that, for a given rational θ, the computed value of $e^{i\theta}$ agreed with $\cos(\theta) + i\sin(\theta)$ to within the ε tolerance.
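A self-contained sketch of this procedure in plain Python with Fraction arithmetic follows; the function names and the ε value are illustrative (series_sum here plays the role of the seriessum routine described above). Each Taylor series is summed until its terms drop below ε, and the real and imaginary parts of the term-by-term expansion of $e^{i\theta}$ are checked against the cosine and sine series.

```python
from fractions import Fraction
from math import factorial

EPS = Fraction(1, 10**12)   # illustrative precision threshold

def series_sum(term, eps=EPS):
    """Sum term(0) + term(1) + ... until a term's magnitude drops below eps
    (valid for these alternating series with eventually decreasing terms)."""
    total, n = Fraction(0), 0
    while True:
        t = term(n)
        if abs(t) < eps:
            return total
        total, n = total + t, n + 1

def cos_s(x: Fraction) -> Fraction:
    return series_sum(lambda n: Fraction((-1)**n) * x**(2*n) / factorial(2*n))

def sin_s(x: Fraction) -> Fraction:
    return series_sum(lambda n: Fraction((-1)**n) * x**(2*n + 1) / factorial(2*n + 1))

def exp_i(theta: Fraction, eps=EPS):
    """e^(i*theta) as an exact (real, imag) pair, summing (i*theta)^n / n!."""
    re = im = Fraction(0)
    t_re, t_im = Fraction(1), Fraction(0)      # the n = 0 term
    n = 0
    while abs(t_re) + abs(t_im) >= eps:
        re, im = re + t_re, im + t_im
        n += 1
        t_re, t_im = -t_im * theta / n, t_re * theta / n   # multiply by i*theta/n
    return re, im

theta = Fraction(3, 7)
re, im = exp_i(theta)
assert abs(re - cos_s(theta)) < Fraction(1, 10**10)   # real part matches cos
assert abs(im - sin_s(theta)) < Fraction(1, 10**10)   # imag part matches sin
```

Because every intermediate value is an exact rational, the only error in the result is the explicitly controlled truncation error, never hidden floating-point rounding.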
This successful verification is a critical result. It proves that we can construct reliable, axiomatically-sound computational objects capable of handling advanced analysis. This forms a trusted foundation upon which more complex machine learning architectures can be built.
4. Prime Factorization as a Latent Vector Space
The Fundamental Theorem of Arithmetic states that every integer greater than 1 can be represented as a unique product of prime numbers. This concept can be extended to all positive rational numbers by allowing negative exponents. This provides a powerful, alternative representation of a number—not as a position on a number line, but as a vector of exponents over a basis of primes.
For example, using the prime basis (2, 3, 5, 7, ...), the number 45/14 can be decomposed as $\frac{3^2 \cdot 5}{2 \cdot 7} = 2^{-1} \cdot 3^2 \cdot 5^1 \cdot 7^{-1}$, which corresponds to the exponent vector $(-1, 2, 1, 1, 0, \dots)$.
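A minimal sketch of this decomposition, using trial-division factorization over a fixed prime basis (the helper names are illustrative, not the repository's API):

```python
from fractions import Fraction

def factor(n: int) -> dict:
    """Prime factorization of a positive integer by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def exponent_vector(q: Fraction, basis: list) -> list:
    """Exponents of q over a fixed prime basis (denominator primes negative)."""
    exps = factor(q.numerator)
    for p, e in factor(q.denominator).items():
        exps[p] = exps.get(p, 0) - e
    return [exps.get(p, 0) for p in basis]

basis = [2, 3, 5, 7, 11]
print(exponent_vector(Fraction(45, 14), basis))   # [-1, 2, 1, 1, 0]

# Multiplication becomes vector addition in this space:
a, b = Fraction(3, 2), Fraction(10, 7)
va, vb = exponent_vector(a, basis), exponent_vector(b, basis)
assert [x + y for x, y in zip(va, vb)] == exponent_vector(a * b, basis)
```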
Our central hypothesis is that this prime exponent space may serve as a more effective latent representation for machine learning models. Whereas the decimal system obscures mathematical relationships, this vector space transforms multiplication and division into simple vector addition and subtraction. While addition and subtraction in this space become highly non-linear, we posit that a sufficiently powerful neural network can learn these complex, emergent rules.
Our codebase includes a complete pipeline for this process:
Generating a large dataset of rational numbers.
Computing their prime factorizations.
Establishing a consistent prime-to-index mapping for the entire dataset.
Converting each number's factorization into a canonical vector representation.
This vectorized dataset forms the input for the neural learning experiment described in the next section.
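The four pipeline steps can be condensed into a short sketch (reusing the factor and exponent_vector helpers from the Section 4 sketch; the dataset size and sampling ranges are illustrative). Note that the basis is derived from the entire dataset, which is what makes the prime-to-index mapping consistent across all samples, including the sums whose factorizations introduce new primes.

```python
import random
import numpy as np
from fractions import Fraction

def primes_of(q: Fraction) -> set:
    """All primes appearing in the factorization of q."""
    return set(factor(q.numerator)) | set(factor(q.denominator))

# Steps 1-2: generate rational triples (a, b, a + b) and factor them.
random.seed(0)
triples = []
for _ in range(2_000):
    a = Fraction(random.randint(1, 99), random.randint(1, 99))
    b = Fraction(random.randint(1, 99), random.randint(1, 99))
    triples.append((a, b, a + b))

# Step 3: one consistent prime-to-index mapping for the whole dataset.
basis = sorted(set().union(*(primes_of(x) for t in triples for x in t)))

# Step 4: canonical exponent vectors over that shared basis.
Xa = np.array([exponent_vector(a, basis) for a, _, _ in triples], dtype=np.float32)
Xb = np.array([exponent_vector(b, basis) for _, b, _ in triples], dtype=np.float32)
Y  = np.array([exponent_vector(s, basis) for _, _, s in triples], dtype=np.float32)
```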
5. A Neural Architecture for Learning Arithmetic
The core of our initial investigation was to determine if a neural network could learn the rules of arithmetic in the prime exponent latent space. We designed an experiment to teach a model a single operation: addition.
The task is deceptively difficult. Given two input vectors representing numbers a and b, the model must output a third vector that represents a + b. There is no simple linear relationship between the input vectors and the output vector: for instance, 1/6 = (-1, -1, 0, ...) plus 1/3 = (0, -1, 0, ...) yields 1/2 = (-1, 0, 0, ...), a component-wise jump that no fixed linear map reproduces. For the model to succeed, it must implicitly learn the complex interplay between the multiplicative (prime) and additive structures of arithmetic.
We constructed a Multi-Layer Perceptron (MLP) in TensorFlow with two parallel input branches, which are concatenated and processed through several hidden layers with normalization and dropout. The model is trained to minimize the mean squared error between its predicted output vector and the true vector of the sum.
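A sketch of this architecture in the Keras functional API follows; the layer widths, dropout rate, and normalization choice are illustrative stand-ins for the repository's exact configuration.

```python
import tensorflow as tf

DIM = 32  # length of the prime-exponent vectors, i.e. len(basis) above

# Two parallel input branches, one per operand vector.
in_a = tf.keras.Input(shape=(DIM,), name="operand_a")
in_b = tf.keras.Input(shape=(DIM,), name="operand_b")

# Concatenate and process through hidden layers with normalization and dropout.
x = tf.keras.layers.Concatenate()([in_a, in_b])
for units in (512, 256, 128):
    x = tf.keras.layers.Dense(units, activation="relu")(x)
    x = tf.keras.layers.LayerNormalization()(x)
    x = tf.keras.layers.Dropout(0.1)(x)

# Predict the exponent vector of the sum; train with mean squared error.
out = tf.keras.layers.Dense(DIM, name="sum_vector")(x)
model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)
model.compile(optimizer="adam", loss="mse")

# model.fit([Xa, Xb], Y, epochs=50, validation_split=0.1)  # arrays from Section 4
```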
A model that successfully performs this task has not merely memorized a function; it has developed an internal model of how prime factorizations are transformed under addition. This is a foundational step toward AI that can understand and manipulate mathematical structures, not just statistical correlations. It suggests that neural networks can, in fact, learn to approximate the rules of number theory from data.
6. Potential Applications
While our research is foundational, the resulting paradigm has significant potential applications across multiple domains:
Financial Market Analysis: This is the primary target domain. The multifaceted representations can be used to engineer highly informative features for time-series forecasting. The surreal combinatorial structure could capture long-range, path-dependent patterns; the prime factorization view could model volatility and multiplicative scaling effects; and the analytic view is ideal for cyclical and oscillatory patterns. Furthermore, the goal of modeling market dynamics as a formal axiomatic system could lead to new methods for risk assessment and algorithmic verification.
Cryptography and Cybersecurity: A deep, structural understanding of prime numbers and factorization is the basis of modern public-key cryptography. An AI that can reason in a prime exponent space could potentially be a powerful tool for analyzing the security of existing cryptosystems or even assisting in the design of new ones.
Fundamental Physics and Signal Processing: Many areas of physics, particularly quantum mechanics, involve complex-valued wave functions and non-trivial analytic structures. A computational paradigm with native, robust handling of complex fields and infinitesimals could provide a superior environment for simulation and discovery.
7. Future Work and Implications
Project Axiomatic is an ongoing initiative. The work presented here is the first step on a longer and more ambitious roadmap. Our future research is focused on two key areas:
Rational Function Neural Networks: We are designing architectures where the activation functions are not fixed (like ReLU or tanh) but are themselves rational functions (P(x)/Q(x)) whose polynomial coefficients are learned during training. This would allow a network to dynamically adapt the complexity and shape of its own nonlinearities, tailoring itself to the problem domain. A minimal sketch of such a layer follows this list.
Generative Graph Networks on Surreal Structures: The ultimate goal is to leverage the DAG structure of the surreal numbers. We plan to build Graph Neural Networks (GNNs) that operate directly on this definitional graph. The objective is to move beyond prediction and towards generation: to train a model that learns the recursive generative grammar of the surreal construction. Such a model could, in theory, explore the mathematical universe, proposing and characterizing new numbers and structures that adhere to the foundational axioms.
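As a concrete illustration of the first direction, here is a minimal sketch of a learnable rational activation as a Keras layer. The class name, polynomial degrees, and pole-free denominator parameterization are assumptions made for this example, not the finalized architecture.

```python
import tensorflow as tf

class RationalActivation(tf.keras.layers.Layer):
    """Elementwise activation P(x)/Q(x) with learnable coefficients.

    Q(x) = 1 + |b_1 x + ... + b_m x^m| keeps the denominator strictly
    positive, so the activation has no poles on the real line (a common
    safeguard; the degrees and initialization here are illustrative)."""

    def __init__(self, p_degree=3, q_degree=2, **kwargs):
        super().__init__(**kwargs)
        self.p_degree, self.q_degree = p_degree, q_degree

    def build(self, input_shape):
        # Start near the identity function: P(x) ~ x, Q(x) ~ 1.
        p_init = [0.0, 1.0] + [0.0] * (self.p_degree - 1)
        self.p = self.add_weight(name="p", shape=(self.p_degree + 1,),
                                 initializer=tf.constant_initializer(p_init))
        self.q = self.add_weight(name="q", shape=(self.q_degree,),
                                 initializer="zeros")

    def call(self, x):
        num = sum(self.p[i] * tf.pow(x, float(i))
                  for i in range(self.p_degree + 1))
        den = 1.0 + tf.abs(sum(self.q[j] * tf.pow(x, float(j + 1))
                               for j in range(self.q_degree)))
        return num / den

# Drop-in replacement for a fixed nonlinearity:
# x = RationalActivation()(tf.keras.layers.Dense(128)(x))
```

Initializing the layer near the identity lets training begin as if the network were linear, with each unit's nonlinearity emerging only as the data demands it.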
Implications of this Research:
For Artificial Intelligence: This research represents a departure from purely statistical, black-box models toward architectures that are more transparent, verifiable, and grounded in axiomatic reasoning. It is a step toward building AI that can "understand" the systems it models.
For Mathematics: This paradigm offers a new computational tool for mathematical exploration. An AI capable of learning the generative rules of a mathematical system could become an invaluable partner for human mathematicians, helping to discover patterns, formulate conjectures, and explore complex structures.
For Apoth3osis: This initiative solidifies our position at the vanguard of foundational AI research. The proprietary architectures and paradigms developed under Project Axiomatic will provide a durable competitive advantage, enabling us to solve our clients' most complex problems with a new class of intelligent systems.
8. Conclusion
The limitations of current machine intelligence are not merely algorithmic, but representational. By restricting our models to a narrow, imprecise view of numbers, we restrict their capacity for deep reasoning. Project Axiomatic is our endeavor to break free from these constraints. By weaving together the universal hierarchy of surreal numbers, the atomic perspective of prime factorization, and the axiomatic rigor of custom analytic fields, we are constructing a new computational foundation for AI. Our initial results demonstrate the viability of this approach and illuminate a path toward models that can learn the abstract and generative rules of mathematics. This is the future of intelligent systems, and at Apoth3osis, we are committed to building it.
9. References
Conway, J. H. (2001). On Numbers and Games (2nd ed.). A K Peters/CRC Press.
Apoth3osis R&D. (2024). Internal Report: Verifiable Field Axioms in Custom Computational Classes.
Apoth3osis R&D. (2025). Internal Report: Learning Arithmetic in Prime-Component Latent Spaces.
Related Projects
A paper introducing a novel architectural paradigm where standard activations are replaced by a flexible framework of rational functions.
Ontological Mathematics: a radical and compelling vision of the universe as a self-describing and self-organizing mathematical entity.
A project utilizing Cellular Automata (CA) to model and predict market behavior not as a regression task, but as a process of organic, rule-based growth.