Automacoin, the computation of all computations
The first references to this project and to the Automacoin name came out of Dr. Hector Zenil and Dr. Narsis Kiani's course on Algorithmic Information Dynamics, offered at the Santa Fe Institute's Complexity Explorer in 2018. Automacoin was conceived to be the first Turing-universal cryptocurrency and also the most eco-friendly of all cryptocurrencies: its computations are not only the most general-purpose and fundamental, but they are stored and never lost or wasted, so they remain available and increase in value over time as more accurate estimations are generated by the Automacoin network.
The computation behind the Automacoin cryptocurrency is not only meaningful; in a fundamental way it can be considered the ultimate meaningful task that intelligent beings can perform before the thermodynamic death of the universe: to use all the energy left in the universe to answer every question that can be answered by mechanistic means (i.e. by computational chains of cause and effect). This includes, potentially and speculatively, questions such as how to extend the thermodynamic life of the universe itself, or how to find the longest path of its Hamiltonian (some have suggested this resolves Fermi's paradox: most civilisations are dormant, waiting to compute when the universe is at its coolest).
In all these fundamental senses, Automacoin can be thought of as the mother of all computations and a generalisation of other highly commendable research-led (and otherwise) coins such as CureCoin or FoldingCoin. For purpose-specific research it still makes sense to support coins like CureCoin and FoldingCoin, but Automacoin can be thought of as computing all possible simulations, not only for protein folding but for everything else, including perhaps a simulation of our own universe, to which Automacoin also aims to contribute. It can potentially verify, or find evidence in favour of, a very simple computable unified field theory of physics (see https://www.wolframphysics.org/), for which Automacoin is naturally equipped.
In fact, the idea is so powerful that Prof. Marvin Minsky, considered one of the founding fathers of modern AI, said the following in a panel discussion on The Limits of Understanding at the World Science Festival in New York City on December 14, 2014, not long before passing away, without having had the chance to follow his own (unfortunately perhaps literally) last piece of advice:
It seems to me that the most important discovery since Gödel was the discovery by Chaitin, Solomonoff and Kolmogorov of the concept called Algorithmic Probability which is a fundamental new theory of how to make predictions given a collection of experiences and this is a beautiful theory, everybody should learn it, but it’s got one problem, that is, that you cannot actually calculate what this theory predicts because it is too hard, it requires an infinite amount of work. However, it should be possible to make practical approximations to the Chaitin, Kolmogorov, Solomonoff theory that would make better predictions than anything we have today. Everybody should learn all about that and spend the rest of their lives working on it.
So Minsky says we should spend our lives learning and calculating this, which Automacoin enables and embraces.
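The practical approximations Minsky calls for can be illustrated concretely. The sketch below is a minimal toy version of the idea (my own illustration, not Automacoin's production code, and it assumes one of several possible rule-table encodings): exhaustively run every small 2-state binary Turing machine from a blank tape, and tally how often each output string is produced. By the Coding Theorem, the most frequent outputs are the algorithmically simplest ones.

```python
from itertools import product
from collections import Counter

N_STATES = 2     # 2-state, 2-symbol machines: (4*2+2)**(2*2) = 10,000 rule tables
MAX_STEPS = 100  # runtime cut-off (an assumed budget for this sketch)

def run(tm, max_steps=MAX_STEPS):
    """Run one machine from a blank tape; return its output string,
    or None if it has not halted within the budget."""
    tape, pos, state = {}, 0, 1
    lo = hi = 0                                  # extent of the visited tape
    for _ in range(max_steps):
        choice = tm[(state, tape.get(pos, 0))]
        if choice < 2:                           # halting rule: write bit, stop
            tape[pos] = choice
            return ''.join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
        c = choice - 2
        tape[pos] = c % 2                        # bit to write
        pos += 1 if (c // 2) % 2 else -1         # head movement
        state = c // 4 + 1                       # next state
        lo, hi = min(lo, pos), max(hi, pos)
    return None

def ctm_distribution(n=N_STATES):
    """Output-frequency distribution over every (n, 2) Turing machine."""
    keys = [(s, b) for s in range(1, n + 1) for b in (0, 1)]
    counts, halting = Counter(), 0
    for choices in product(range(4 * n + 2), repeat=len(keys)):
        out = run(dict(zip(keys, choices)))
        if out is not None:
            counts[out] += 1
            halting += 1
    return {s: c / halting for s, c in counts.items()}

dist = ctm_distribution()
# Short, simple strings dominate, echoing the Coding Theorem m(s) ~ 2^(-K(s))
print(sorted(dist.items(), key=lambda kv: -kv[1])[:5])
```

The real network does this for vastly larger machine spaces, which is exactly why the distributed, incremental computation matters.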
"The answer to everything"
To be a self-sustaining project that can fund itself, one end goal of Automacoin is, as for other successful coins, to attract enough competition that the coin gains financial value and can be traded for real money. The greatest reward, however, is knowledge-driven and intellectual in essence. A miner's reward is access to the output of the computation the whole network is running, because the network is building an all-powerful Oracle able, in principle, to answer every possible question formulated in the form of a computer program (which, under some assumptions, could simply be every possible question).
Indeed, what the Automacoin network computes are pieces of Deep Thought, in reference to Douglas Adams' The Hitchhiker's Guide to the Galaxy. More technically, it also computes a Chaitin Omega number, also known as the infinite wisdom number because it can tell whether a computation will halt and can thus answer every computable Yes/No question imaginable. But we compute not only that number; we also compute the output (the answer) itself and the computation that led to it, so we never have to face an answer like '42' and be completely puzzled. What miners can exchange, or be approached by others for, is enough coins to gain access to the priceless Automacoin database of such answers for research purposes. Indeed, dozens of papers, if not hundreds, have already been written that take advantage of the kind of output Automacoin can produce in order to advance science.
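For reference, Chaitin's Omega for a prefix-free universal machine $U$ is the halting probability

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

Knowing the first $n$ bits of $\Omega_U$ suffices, in principle, to decide the halting of every program of length at most $n$, which is what earns it the name of infinite wisdom number.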
Some of these papers contribute to problems similar to protein folding, and perhaps second only to protein folding in importance within molecular biology, such as nucleosome positioning, which is key to understanding gene regulation (https://academic.oup.com/nar/article/47/20/e129/5568206). Other papers have shown how the computation of Automacoin can be exploited in AI research and machine intelligence (https://arxiv.org/abs/1910.02758) and combined with statistical machine learning and deep learning, e.g. in deep generative models, among other things by replacing or complementing loss/regularisation techniques with algorithmic-information ones, and in what we call causal deconvolution, of which the journal Nature produced a video (https://www.youtube.com/watch?v=rkmz7DAA-t8&feature=youtu.be) on its own initiative, to highlight what it considered very relevant results in a paper published by the journal (https://livingsystems.kaust.edu.sa/docs/default-source/publications-documents/causal-deconvolution-by-algorithmic-generative-models.pdf?sfvrsn=6b037360_2). In general, our computation makes any other computation more meaningful and interpretable. It is the ultimate computation in practice and in theory.
The ultimate compression algorithm
Another, more practical goal of Automacoin is to help build the ultimate compression algorithm: the first ever incremental, open compression algorithm. Unlike current popular statistical compression algorithms, which are in most cases black boxes and closed programs unable to improve compression indefinitely over any possible input, the Automacoin compression algorithm can do so by way of the two methods on which it is based, combining the basics of algorithmic complexity (the Coding Theorem Method, or improved variations of it) and traditional information theory (the Block Decomposition Method, or improved variations of it). This compression works by querying an Automacoin database consisting of the estimated Universal Distribution (under the optimality assumptions underlying the reference enumeration).
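A minimal sketch of how the Block Decomposition Method combines the two ingredients is given below. The CTM values here are toy placeholders standing in for real database entries, and the two-bit block size is chosen only for illustration; it is not Automacoin's actual configuration.

```python
import math
from collections import Counter

# Toy stand-in for a CTM lookup table: estimated algorithmic complexity
# (in bits) of each short block. Real values would come from the network's
# estimated Universal Distribution; these numbers are illustrative only.
CTM = {"00": 2.0, "11": 2.0, "01": 3.0, "10": 3.0}

def bdm(s, block_size=2):
    """Block Decomposition Method: sum, over the unique blocks of s, the CTM
    value of the block plus log2 of its multiplicity (the classical-information
    cost of stating how often the block repeats)."""
    assert len(s) % block_size == 0, "sketch assumes an even split into blocks"
    blocks = Counter(s[i:i + block_size] for i in range(0, len(s), block_size))
    return sum(CTM[b] + math.log2(n) for b, n in blocks.items())

print(bdm("00000000"))  # one repeated block, low estimate  -> 4.0
print(bdm("00110110"))  # four distinct blocks, higher estimate -> 10.0
```

The algorithmic part (CTM) captures the structure inside each block; the Shannon-style part (the log-multiplicity terms) captures the statistics of how blocks repeat, which is why the method scales the algorithmic estimates up to longer strings.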
The database itself is infinite and evidently will never be completed (similar to Borges' infinite library), but for every piece of data there is always a finite sequence of smaller pieces of data (e.g. substrings) for which the database is populated with approximated values able to reproduce the larger piece by mechanistic means with a generative program (and unlike Borges' library, which can be proven to be full of meaningless information, i.e. garbage). The database thus provides, in principle, a lower bound on the compressibility of the larger piece, based on the different Turing-universal models of computation that Automacoin will support in the future (starting with binary Turing machines, from smaller to larger numbers of states and rules) and on an incrementally self-improving algorithm that the Automacoin network helps to expand, providing ever tighter bounds for that universal model.
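The theoretical link that justifies using these frequency estimates as compressibility bounds is Levin's Coding Theorem, relating the algorithmic probability $m(s)$ of a string $s$ to its algorithmic (Kolmogorov) complexity $K(s)$:

```latex
m(s) \;=\; \sum_{p \,:\, U(p) = s} 2^{-|p|}, \qquad K(s) \;=\; -\log_2 m(s) + O(1)
```

Any generative program in the database that reproduces $s$ therefore yields an upper bound on $K(s)$, i.e. a lower bound on how compressible $s$ is.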
The incentive scheme rewards people with what we call TuringCreds, which ultimately give full access to the computation's output, but only grant future access if the miner continues to work for the network. Every miner starts with the maximum number of TuringCreds but can see them reduced if computational shortcuts are taken. Shortcuts to produce more Automacoins are guaranteed to be impossible in general, thanks to very strong mathematical irreducibility theorems, starting with the strongest one based on Turing's proof of the undecidability of the halting problem.
However, some easy shortcuts involving potentially many Turing machines are possible (e.g. infinite cycles and other non-halting machines), and the network will penalise shortcutting for the following reasons: (1) the community wants the network to embrace some fairness principles, including the inclusion of people with more limited access to methods or large computing resources, and (2) a verifier chosen at random may not have that advantage and would end up computing more than the original miner while being rewarded less than deserved. One way to discourage shortcutting is simply to reward by the number of computed steps. On the other hand, we want to embrace and aggregate clever algorithms so that the network can compute the ultimate Oracle faster, so shortcutting is allowed if the algorithms are added to the Automacoin algorithms database, for which miners will be instantly rewarded by a multiplicative factor. The network will then use those algorithms when distributing machines: a pre-analysis of each program's structure (e.g. the rule table of a Turing machine) is performed, and the shortcut instructions are either sent to the miner, or the machine is simply skipped if its output can be predicted (e.g. non-halting).
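One of the simplest non-halting shortcuts mentioned above, exact infinite cycles, can be detected mechanically. The sketch below is my own minimal illustration, with an assumed rule-table encoding rather than Automacoin's actual one: it runs a binary Turing machine under a step budget and flags the machine as provably non-halting when a full configuration (state, head position, tape) repeats.

```python
def step(tm, tape, pos, state):
    """Apply one transition. Encoding (an assumption for this sketch):
    choice 0/1 means write that bit and halt; otherwise choice-2 encodes
    the bit to write, the move direction and the next state."""
    choice = tm[(state, tape.get(pos, 0))]
    if choice < 2:
        tape[pos] = choice              # halting rule: write the bit and stop
        return None, None
    c = choice - 2
    tape[pos] = c % 2                   # bit to write
    move = 1 if (c // 2) % 2 else -1    # head direction
    return pos + move, c // 4 + 1       # new position and next state

def classify(tm, max_steps=500):
    """Return 'halts', 'loops' (a configuration repeats, so the deterministic
    machine provably never halts) or 'unknown' (budget exhausted)."""
    tape, pos, state = {}, 0, 1
    seen = set()
    for _ in range(max_steps):
        config = (state, pos, tuple(sorted(tape.items())))
        if config in seen:
            return "loops"
        seen.add(config)
        pos, state = step(tm, tape, pos, state)
        if pos is None:
            return "halts"
    return "unknown"

halter = {(1, 0): 1, (1, 1): 0}                        # halts on its first step
looper = {(1, 0): 8, (1, 1): 0, (2, 0): 2, (2, 1): 0}  # bounces between two cells
print(classify(halter), classify(looper))  # -> halts loops
```

Only exact configuration repeats are caught here; the halting problem guarantees that no such pre-analysis can ever classify every machine, which is why the 'unknown' verdict is unavoidable in general.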
Pre-Automacoin Epochs
Automacoin has already used some of the fastest computers in the world to produce its first approximations. We are now in what we call the D6 epoch, after the D5 epoch ended in 2014 (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0096223); its programs (https://www.algorithmicdynamics.net/software.html) and data (http://complexitycalculator.com/) were published. Each epoch DN is defined as the period ending when the pre-Automacoin network calculation (based on a local computer farm or supercomputer) of the distribution of all Turing machines with N states is completed. D4 was calculated on a top-end Apple laptop in about 3 months around 2008, but by the time D5 was calculated, D4 could be recomputed in only a week thanks to the increase in resources (a supercomputer farm in Seville). The D5 epoch officially lasted two years, from 2012 to 2014, and each epoch is expected to take longer despite the growth of computational resources, because it is a mathematical fact that the calculation behind Automacoin grows asymptotically faster than any computable function (it is equivalent to what is called the Busy Beaver function). D6 was computed on Shaheen II (https://en.wikipedia.org/wiki/Shaheen_(supercomputer)), a Cray XC40 that was at one point the 7th fastest computer in the world. The calculation of D6 cost almost 1 million dollars for more than 50,000,000,000,000 machine hours of computation time, at a rate of 64,505,074,000 Turing machines per hour (of a total of 232,218,265,089,212,416), kindly awarded by the King Abdullah University of Science and Technology (KAUST), sponsored by the Living Systems Lab led by Prof. Jesper Tegnér, and supervised by the Algorithmic Dynamics Lab at the Karolinska Institute, led by Dr. Hector Zenil and Dr. Narsis Kiani, with their student Victor Guiochin. Many more people involved in the pre-Automacoin era are listed at https://www.algorithmicdynamics.net/software.html
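The growth claim can be made precise via the Busy Beaver (maximum shift) function. For $n$-state binary Turing machines started on a blank tape, let

```latex
S(n) \;=\; \max \,\{\, \text{steps taken by } M \text{ before halting} \;:\; M \text{ an } n\text{-state machine that halts on the blank tape} \,\}
```

Because deciding which machines halt is undecidable, $S(n)$ provably grows faster than any computable function; if it were bounded by a computable function, that bound would solve the halting problem. This is why each epoch DN requires disproportionately more work than D(N-1), no matter how much hardware improves.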
Videos about the Science behind Automacoin and its potential applications
Technical Papers
- Causal Deconvolution by Algorithmic Generative Models, Nature Machine Intelligence, vol. 1, pp. 58–66, 2019 [online, video] (NPG)
- A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity, Entropy 20(8), 605, 2018 [online] (MDPI)
- Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability, International Journal of Parallel, Emergent and Distributed Systems, 2018, doi:10.1080/17445760.2018.1448932 [online, preprint] (Taylor & Francis)
- A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences, Complexity, vol. 2017, Article ID 7208216, 2017 [online, preprint] (Wiley/Hindawi)
- Two-Dimensional Kolmogorov Complexity and Validation of the Coding Theorem Method by Compressibility, PeerJ Computer Science, 1:e23, 2015 [online] (PeerJ)
- Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines, PLoS ONE 9(5): e96223, 2014, doi:10.1371/journal.pone.0096223 [online, data, online program, Mathematica API, R package] (PLOS)
- Correlation of Automorphism Group Size and Topological Properties with Program-size Complexity Evaluations of Graphs and Complex Networks, Physica A: Statistical Mechanics and its Applications, vol. 404, pp. 341–358, 2014, doi:10.1016/j.physa.2014.02.060 [online, preprint, video] (Elsevier)
- Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures, Computability, vol. 2, no. 2, pp. 125–140, 2013, doi:10.3233/COM-13019 [online, preprint] (IOS)
- Numerical Evaluation of the Complexity of Short Strings: A Glance Into the Innermost Structure of Algorithmic Randomness, Applied Mathematics and Computation 219, pp. 63–77, 2012 (online 2011), doi:10.1016/j.amc.2011.10.006 [preprint] (Elsevier)