ISSN 0964-5640

FRACTAL REPORT 43

And Innovative Computer Applications



A Major Change for Fractal Report 2

Editorial, Announcements and Letters 4

Chaotic Justice G.T. Swain 9

Book Review: Pickover: VISUALIZING BIOLOGICAL INFORMATION Dr Gabriel Landini 12

What is Chaos? Yvan Bozzonetti 13

Can Computing Devices Beat Physical Laws? Yvan Bozzonetti 14

Impact of Computing Power Yvan Bozzonetti 20

The Statute of Virtual Worlds Yvan Bozzonetti 21

Bagula's Programs Roger Bagula 24

Tent operations on IFS Sierpinski Malcolm Lichtenstein 32



Fractal Report is published by Reeves Telecommunications Laboratories,

West Towan House, Porthtowan, Truro, Cornwall TR4 8AX, United Kingdom.

Internet: [email protected]

Volume 8 no 43 First published September 1996. ISSN 0964-5640.



Major Change for Fractal Report



For some time I have been considering the future of Fractal Report. The number of people supporting it, in terms of articles and subscriptions, has been falling, although the rate of fall-off has slowed.



The rise of the Internet (see editorial) has been the cause, but it may also be the salvation. What I propose is to put Fractal Report on the World Wide Web as well as on paper. There will be limitations to start with, and articles that are not in machine readable format will not appear, ie the web issue could be shorter than the paper one. However, the increased readership from people on the Internet should also bring an increased and varied flow of articles.



In future, the following form of precedence will be applied to articles:



1. Articles supplied by email or on disk in HTML format. The article will have pages of the form *.htm containing the text and *.gif for the images, with the images linked to the text. Zip and GIF files sent over ordinary email must, of course, be uuencoded. Such articles can also include links to your own web site if desired, but as the paper issues of Fractal Report will continue, the articles should be as self-sufficient as possible.

Java applets may be included in articles so that readers can try programs on line, but please supply listings as text files as well for the paper edition. Bear in mind, though, that most if not all paper readers will prefer BASIC listings - many have obsolete computers and cannot afford things like Windows 95, Microsoft Explorer, Netscape etc. (Microsoft Explorer is not really free - it will only run with Windows 95 and can be seen as an interim upgrade thereof.)



2. Articles supplied on disk or by email not in HTML format. These should be in the following format:

directory: <article name>

contents: <article name>.txt

<image1>.gif

<image2>.gif .. etc

<article name>.bas for BASIC program

Do not jumble articles and images on different subjects.



3. Articles of text and images only clearly printed suitable for scanning. I will not scan in listings because of the risk of errors.

4. I will have to be very desperate to accept articles printed on "lavatory paper" with bald ribbons and hand written corrections, and if this is all I can get I see little future for Fractal Report.



The web site will carry the current issue, with illustrations in colour of course. Past issues exist mainly only as paper editions and are unlikely to appear on the web, and the current issue will be deleted when it gets out of date and the next one is posted. So you will need to print it out, save it to disk, or keep up your paper subscription. If you have a colour printer, you will be able to print the illustrations in colour.



Advertisements.



If Fractal Report on the web becomes popular, then we may be able, with a clear conscience, to charge for advertisements. This should compensate for those who cancel their paper subscriptions. A publicly visible counter will be fitted to the web page.



Format of paper issue



This may change to be simply a run of web pages printed out - a lot depends on how well WordPerfect 7.0 converts HTML into WordPerfect editable documents and how easy it is to merge the two formats. The main change could be the loss of the two-column right-justified format; each article would no longer occupy an integral number of pages, and each issue might not have a fixed number of pages.



If anyone in non-UK countries is interested in running a print and snail mail service using the web pages, I would be interested in discussing it. It could be cheaper than snailing them from the UK.



Remuneration of Authors



Previously Fractal Report authors were remunerated with free subscriptions for the volume. This will cease, as it is anticipated that most authors will read the World Wide Web edition and will not require that sort of remuneration. However, for the sake of continuity, any author in this volume will be given the option of a free subscription to the next one, or a free volume of back issues instead.

Editorial and Announcements



Editorial



The pressure on printed newsletters from the Internet is growing.



Paper newsletters, such as Fractal Report, Longevity Report, and The Immortalist are very much under threat from the Internet.



These newsletters are not like magazines sold off newsstands to the population at large. On average they circulate amongst those with higher than normal intelligence and literacy. In terms of the area of paper you receive per unit of money spent, they offer poor value. People buy them, however, for the content. They buy them because this content is not present in other periodicals.



The groups amongst which they circulate are just those groups that are joining the Internet in large numbers.



Fractal Report, for example, costs more than Computer Shopper. Even if you cut out the advertisements in Computer Shopper, there is still more about computers in it than there is in Fractal Report. However, people buy Fractal Report because they want to read about fractals, and because those who want to publish their points and ideas know that, as a small newsletter, it accepts virtually everything sent to it.



This mechanism has kept small circulation newsletters going for decades - many people who buy them are motivated by the need to see themselves in print, and they do not get accepted by other, larger periodicals. Indeed, books on how to start a small business frequently make this point: you don't sell your newsletter or fiction or poetry magazine to people who want to read it, you sell it to those who want to write. They will often buy multiple copies to give to their friends to show that they are in print! [Although Fractal Report did not rely on this mechanism - authors got free copies.]



Now this is changing. The Internet is here, and growing at some 10% per month. The Internet was actually available to the public some years ago, when UK telephone calls were still expensive because the service was run by a company owned by the government. Even then some cryonicists here announced that they were only interested in communicating with other people on the Internet.



Now Internet connection is very much cheaper. You could get onto it in a limited way using a second-hand computer costing of the order of $100. I have read of projects starting to develop software to enable people to get full facilities using such primitive equipment. [Processor speed is always going to be above the 28.8k modem speed. A lot of processor speed is required simply because modern software is written in high level languages.]



Also companies are trying to develop "cheap" purpose made Internet communicators that plug into the television and the phone line. [However I have my doubts whether the prices quoted compete with second hand computers a couple of years old - much more sophisticated than the $100 machines mentioned earlier.]



As more and more people get on the Internet, the entire class which subscribed to paper newsletters will get there. It will get there well ahead of the much larger class that buys magazines at newsagents. More and more of their time will be spent on the Internet. To start with they will read less of the newsletters they receive, then they will start cancelling subscriptions. Those who produce the newsletters and write in them will devote more of their time to the Internet, neglecting their paper newsletters. The non-financial reward from producing their own newsletter is then fulfilled by interactions with other people on the Internet.



Fractal Report has suffered particularly - many of its authors now are diehards who have taken up a positive resistance to the Internet, and these are far fewer in number than in days past. Fractal postings on the Internet, because of the nature of the medium, do not translate so easily to paper.



Longevity Report has fared better. It has been able to use articles from the Internet to improve its quality, but in the long term it will inevitably lose its readership. It is there to serve a purpose, and I have to say that I would advise readers to get on the Internet if they can because that purpose can be served better there.



I am also a subscriber to Terra Libra News and I must say that the Internet has made this almost totally redundant for me personally. At the moment they have something like 5,000 members of whom only 100 or so are on the Internet. But as they propagate by multi level marketing and as a company has started that sells computers by the same method, it is clear that most of their "big hitters" will appear on the net as they find out about it. [By the time this appears in print I will have told them.]



The difficulty Terra Libra will then face is that it is much more difficult to get people to pay for information in its pure form as opposed to buying it on pieces of paper. Buying on paper makes you think you have bought a product. You know it takes money to print and mail a newsletter. Whereas if you buy it as a broadcast or download, then you are aware that the costs of making the broadcast or download per subscriber are negligible.



I have already, as manager (ie editor, printer, collator, publisher and mailer) of Fractal Report, received a number of enquiries about taking over other paper newsletters whose managers have lost interest. Already a number of specialist computer newsletters have simply shut up shop and gone away.



There are limits: you can comfortably print and mail about 100 copies yourself. You can get an outside printing firm to print a minimum order of about 200 copies - it may be more now - at a higher but reasonably economic rate. You still mail them yourself. The financial return is about the same as doing 100 copies, and as there is more mailing the time spent is about the same. If the numbers rise further, you delegate all the handling; your income rises and your time commitment falls to editorial work only. If you achieve this (I never did) you can make a living at it. Some people in England in the 1980s went on to become pound millionaires this way, mainly with periodicals dealing with investments.



The Internet is going to make all this much harder from now on.



Cryonics organisations have a real problem. If they get a potential recruit from the Internet-illiterate, then he will want paper, lots of it. The commitment to a periodical newsletter will be the only hard evidence that his chosen society has a life (unless he is local and goes to meetings). But if most of the society is on the Internet, and he won't or can't join, and there is no newsletter, then he will probably go away.



One answer may be to produce printouts from the Internet. But this has its problems. The Internet's newsgroup style comes out strange on paper without editing, which takes a lot of time. This is because the Internet is not only a communications medium, it is a sort of time control medium. Although everyone connects to it in real time, people answer their email and reply to newsgroups in their own time. This is called asynchronous communication, and it is this feature that gives it its real power over talk in a bar or a telephone chatline.



What happens is like this. Someone posts a message on, say, 8 January. Most people answer it the next day, but someone else answers it on 3 February. In order for everyone to know what the answer is about, the original message is often quoted back. Otherwise those reading on 3 February may not have seen the original post on 8 January, or if they did they have forgotten it.



This feature is very powerful. It enables everyone to "butt in" and have their say in conversations. It also enables people to think and compose their answer carefully. In real time communications those slow of thought appear dullards, even if they are really quite intelligent. Of course it also has its downside - a lot of silly messages also appear, but these can easily be skipped over.



The Internet is the future. If you ignore it and refuse to learn to use it, you are like the child that prefers to play in the sand instead of learning to read and write.



Announcements



New Book, Edited by Clifford A. Pickover



FRACTAL HORIZONS: The Future Use of Fractals

St. Martin's Press: New York. ISBN: 0-312-12599-2.

Publication Date: July, 1996 -- Just published!



Some of you will be fascinated by the latest book edited by Cliff Pickover. It's called FRACTAL HORIZONS: The Future Use of Fractals, and it gives an account of the state of the art and speculates on advances in the 21st Century.



Preface



Part I. Fractals in Education

Chapter 1. Conquering the Math Bogeyman - William Beaumont

Chapter 2. The Fractal Curriculum - David Fowler

Chapter 3. Fractals and Education: Helping Arts Students to See Science - Michael Frame



Part II. Fractals in Art

Chapter 4. The Computer Artist and Art Critic - J. Clint Sprott

Chapter 5. The Future of Fractals in Fashion - Danielle Gaines

Chapter 6. Knight Life - Ronald Brown



Part III. Fractal Models and Metaphors

Chapter 7. One Metaphor Fits All: A Fractal Voyage with Conway's Audioactive Decay - Mario Hilgemeier

Chapter 8. Sponges, Cities, Anthills, and Economies - Tim Greer

Chapter 9. Fractal Holograms - Douglas Winsand

Chapter 10. Boardrooms of the Future: The Fractal Nature of Organizations - Glenda Eoyang and Kevin Dooley



Part IV. Fractals in Music and Sound

Chapter 11. Fractal Music - Manfred Schroeder

Chapter 12. Using Strange Attractors to Model Sound - Jonathan Mackenzie



Part V. Fractals in Medicine

Chapter 13. Pathology in Geometry and Geometry in Pathology - Gabriel Landini

Chapter 14. Fractal Statistics: Toward a Theory of Medicine - Bruce West



Part VI. Fractals and Mathematics

Chapter 15. Fractals and the Grand Internet Parallel Processing Project - Jay R. Hill

Chapter 16. Self-Similarity in Quasi-Symmetrical Structures - Arthur Loeb

Chapter 17. Fat Fractals in Lyapunov Space - Mario Markus and Javier Tamames



Glossary

Contributors



Since the book is filled with beautiful images, a strange array of topics on art and science, and computer/mathematical recipes, it should have broader appeal than most scientific books. The book will appeal to computer artists and traditional artists, computer hobbyists, mathematicians, humanists, fractal enthusiasts, scientists, and anyone fascinated by unusual ideas and optically provocative art.



Some contributors describe the challenges of using fractals in the classroom. Others discuss new ways of generating art and music, the use of fractals in clothing fashions of the future, fractal holograms, fractals in medicine, fractals in boardrooms of the future, fractals in chess, and more. Frequent glossaries should help ease new readers into unfamiliar waters. Most of the ideas expressed in this book are practical and are either currently being implemented or will be implementable within the next decade. The goal is to provide information which students, laypeople, scientists, programmers and artists will find of practical value today as they begin to explore an inexhaustible reservoir of magnificent shapes, images, and ideas.



News of Contemporaries



Recreational and Scientific Computing (REC) marked the completion of 10 years with its issue 74. Dr Ecker tried one more time to install Windows 95 and made it, claiming now to be a convert. I would agree, although my problem was not the virus scanner corrupting the new system that gave Dr Ecker trouble, but the poor CMD640x driver supplied with Windows 95. I found it odd that I had to download an update from the Internet, since I have an old 60MHz motherboard which surely predates Windows 95.



Most of this issue was taken up with readers' letters.



[REC 909 Violet Terrace Clarks Summit PA 18411 USA, $36 pa worldwide, $29 Canada, $27 USA.]



Roger Bagula's colour printed newsletter The Fractal Translight Newsletter July-August issue included an interesting picture that may have been a montage or may have been a generated fractal. If the latter, it was interesting as it has a passing resemblance to a television test card.



His editorial discussed the publishing of books with the message "Science is dead - we know all we can ever know". I get angry about this, because if this message is believed it will mean that humanity gives up its struggle against the death and suffering of the individual. Mr Bagula's objection is that his new mathematics will give rise to "new unheard of technologies", and he doesn't want it ignored because of some blanket cessation of new thought and research.



Mathematical "black holes" (iterations that always end up at a specific number) have crept into TFTN from REC and been used as the basis for fractal pattern drawing programs. As usual there are plenty of listings, although some are hard to read as the print is a bit overdone.

Mr Roger Bagula, editor and publisher of The Fractal Translight Newsletter.




Chaotic Justice



by G.T. Swain



with reply from Brian Haines



This, rather belatedly, is my response to Brian Haines' article entitled Towards Justice. I think he had some interesting points to make, and even though I entertained some minor doubts about the suitability of such an offering for Fractal Report, there were some ideas worth challenging.



To compare the mechanisms of either British or American justice to a computer program does not in my view go far enough. It could be supposed that a legal system has a structure of sorts, but in most other respects it does not follow the disciplines of good programming practice. Neither does it easily lend itself to any of the better known programming languages, or at least those with which I am familiar.



With the exception of programs relying on self-sustaining variables, it is usual to open with a set of declarations of variables, constants, and on occasion libraries of prepared functions and procedures, like the stdio library of C programming.



The next step is to identify the sets of conditions which the program is intended to emulate, as a prerequisite for whatever tasks the program will be handling. Somewhere in amongst the code the programmer establishes a working environment and designs activities to be performed within it. Finally there are the "real world" hooks which are intended to enable users to enter data or retrieve information - the interface.



The legal system, as it is popularly understood by laymen like me, is initially non-declaratory. The closest it comes to programming is to impose a set of conditions (laws) which might with extreme difficulty be represented as IF THEN ELSE constructs. Problems will however occur because, whilst the IF component is easy to define, the THEN and ELSE parts cannot often be simply stated. Good programmers know that however convoluted their programs become, essentially they start as a selection of simple alternatives - yes or no, zero or one, etc.



The only workable analogy for the legal system I can find is fractal geometry, where the laws may be easily represented in terms of fractals which are themselves the result of combining fractals - the inherent definition being that of a system which is part real and part notional in its representation. I think it would be reasonable to say that much of British law is predicated on earlier legislation, or sometimes on case law, where there may be flexibility in the way that supposedly rigid conditions are varied by human interpretation.



One interesting side issue that occurred to me whilst scribbling these ideas on paper is the concept of "self-sustaining variables". Many years ago I first got hooked on computing using a Sinclair ZX81. It will be remembered that this wonderful piece of hi-tech equipment came supplied initially with a magnificent 1K of Random Access Memory. To make the most of this copious space, programmers often had to resort to all sorts of tricks to fit working programs into the available memory. So instead of assigning values to variables directly, programmers would employ Boolean logic in ways that consumed fewer bytes than the LET x = whatever required by Sinclair BASIC. Instead of numerical values it was possible to employ the logical values of TRUE, FALSE, PI etc. It is usual for such statements to have arbitrary values, so the mathematical value of a TRUE statement could be written as either plus or minus one, and FALSE as zero. Similarly, taking the integer value of PI would produce the value 3. In order to assign the value 2 to the variable x, a programmer might include the line LET x = INT(PI)-(TRUE) instead of saying LET x, a variable previously assigned the value 3, equal x-1.
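For the curious, here is the flavour of the trick, reconstructed from memory of Sinclair BASIC, so treat the exact byte counts as illustrative rather than gospel. Every numeric literal in a ZX81 program line carried a hidden copy of its five-byte floating point form, while keywords such as NOT, SGN, INT and PI cost one byte each, hence:

LET X=0   becomes   LET X=NOT PI        (NOT of any non-zero value is 0)
LET X=1   becomes   LET X=SGN PI        (the sign of PI is 1)
LET X=3   becomes   LET X=INT PI        (the integer part of 3.14159... is 3)
LET X=2   becomes   LET X=INT PI-SGN PI (3 minus 1, the example given above)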



Later, when I migrated to more versatile systems, I sometimes found it helpful to include similar non-declaratory values as an aid to program flow, only by this time it could be stated more directly, as IF TRUE or IF FALSE statements used to determine which set of instructions was the next to be actioned.



Such devices can also be represented in binary arithmetic as logic gates, where combinations of values can be ANDed or ORed in order to induce possible outcomes.



Whether or not logic and justice can be combined anywhere else but within a chaotic system is debatable, and I for one would find it extremely difficult to set out in program form, there being far too many variations in the way laws are defined, implemented and interpreted.



Brian Haines replies:



The article Chaotic Justice deals with the problems that must arise in trying to write a computer programme that will emulate a justice system.



In broad terms I have to agree with a lot of what the writer has to say. I know little of the technicalities of computer programming, all I know is that when I switch my computer on I can call up various applications that allow me to perform certain tasks.



I am told, for instance, that it is possible to get a stock exchange programme that will help to select shares which will give me good returns. I presume someone has worked out all the probability constants and so forth that should make this a feasible objective. For my part I cannot believe it will really do more than remove the more obvious losers. It is illogical to suppose everyone can make money in this way. If such a programme could really work then there would be no need for the Stock Exchange at all. Much the same reasoning must apply to horse racing programmes and the rather absurd Lottery selection programmes.



I also have on my computer a number of astrological programmes which are supposedly able to give me an in-depth analysis of character; they also have predictive capacities. I like to use them as I do a certain amount of astrological forecasting. I am not convinced they actually replace my own instincts, for much the same reason as I cannot accept the Stock Exchange programmes as having real validity. On the other hand they are fun and they give a nudge in a certain direction.



So when it comes to law, all these problems are there as well. Law is essentially a system of social control which varies with the age in which we live. It cannot be predictive, and the variables are so great that the best that can be achieved is some form of general pattern. How this can be expressed in mathematical terms I do not know. Certainly some years ago, before the advent of the computer as a universal tool of general application, there was a Belgian professor who claimed to have reduced law to a series of mathematical formulae. Together with a couple of hundred other delegates at a university conference, I listened to his explanation. My French is not good, so I could be excused for not fully understanding the theory; however, after talking with numerous other highly literate lawyers who had perfect command of the language, I discovered no one who was able to explain the system the professor was putting forward. I tried asking him personally later; I left baffled and no wiser. Perhaps none of us understood mathematics, or perhaps there is an erudition that awaits a more evolved type of brain. At all events, I have never heard of the system again. It does show, however, that there have been moves over the years to try to bring logic into the field of law.



It was Pontius Pilate who, we know, was able to put his finger on the problem of law. "What is truth?" he asked, and that question is still awaiting an answer 2000 years later. We do not yet have a certain answer as to whether justice has anything to do with truth. So for these reasons alone it seems we cannot construct a mechanical programme to predict the outcome of a legal action.



The layman outside the legal system sees it as an unfair set of rules which enrich the establishment at the expense of the individual. There is no doubt the system of legal representation is wholly unfair and needs to be revised. This does not mean the law itself is poor or that some other system of laws will give greater equity.



In all cases there are winners and losers, the losers always say they have been unfairly treated. We was robbed is the cry. The winners always accept the fairness of the system that has given them victory. Never in the whole field of human conflict have the victors turned around and said let us divide the spoils. It is not in human nature to do this.



So how can a computer programme build in such a thing as compromise when the contestants do not believe in it as a solution? If they did they would not be in a legal argument, or before the Courts in the first place.



No one likes lawyers, no one likes the law. There is a mistaken perception that the laws of nature and the laws of science are the same things as the laws of people. The word law is used in different senses, leading to the misconception that a lawyer is some sort of scientist working in a field of exact quantifiable rules. As anyone who comes into contact with a problem quickly comes to realise, there are often no clear answers. Take for instance that staple of legal problems, the boundary dispute. It seems so easy to show that on the map a line runs from so and so to somewhere else. It has to be clear beyond peradventure that either one party or the other is in the wrong. It should be a simple matter to walk around the boundary and see where the fence lies. If it were that easy there would not be so many cases upon the subject.



In a boundary dispute there is often a map, a fence, common usage and long time memory. Every one of these is deficient in some respect. How can a computer programme take account of these variables with any degree of fairness? For a lot of the time no one cared where a boundary ran. It didn't matter, because no commercial or other great interest was involved. Then suddenly values change; perhaps oil is found, or water, or a new building scheme has made the few feet in dispute a matter of contention. The real problem lies in the fact that no one knows exactly where the boundary lies at all; it was never properly defined in the first place. And so thousands of pounds are spent trying to establish a right that was never there.



Perhaps the answer is human greed knows no bounds. The other person is always in the wrong, everyone knows that. So what we have to do is always carry our own personal computer programme about that shows we are in the right and then find another programme to evaluate the comparative strengths of both programmes; theirs and ours.



Book Review:

VISUALIZING BIOLOGICAL INFORMATION

Clifford A. Pickover, Editor.

World Scientific, Singapore, 1995.

ISBN 9810214278, hard cover, $64.00



by Dr Gabriel Landini



I must say that this is a fascinating book about a fascinating subject: new ways of representing symbolic sequences of nucleic acids (DNA, RNA, gene mapping) and proteins. DNA (deoxyribonucleic acid) is the molecule that stores the information (called the genetic code) in organisms as a series of sequences of 4 bases coded A, T, C and G. This information is written with these "characters" forming "words" of 3 letters. The words code for specific amino acids, which are, in turn, the building blocks of proteins. These information blocks are organised into larger functional units called "genes". Our genetic code consists of 50,000 to 100,000 genes of 2,000 to 2,000,000 bases each.



Certainly many difficulties arise when all this massive information has to be understood or represented in some way. Visualisation techniques originated to overcome these problems and to allow the disclosure and identification of interesting or useful "patterns" in the sequences.



The book consists of the editor's preface and a collection of 15 contributions written in scientific journal style by experts on the subject. There are two impressive colour plates and many black and white illustrations and diagrams, and all the contributions include a "glossary" to clarify some of the scientific terms for the reader. The preface is an introduction to visualisation, its objectives and examples. It includes about 10 pages of literature references on computational biology, computers and DNA, genetics and music, genetics and fractals, and visualisation. There is also a comprehensive list of genetic and biological data repositories with Internet addresses, a list of electronic newsgroups (BIOSCI), a list of USENET newsgroup addresses, and other Internet resources of interest.



The chapters deal with a variety of problems, such as graphical representations to disclose "patterns", whether a sequence is "random" or not, taxonomical classifications, how to represent the 3-dimensional structure of proteins in two-dimensional space, and how to quantify similarities between sequences. Some of the contributions go beyond the description of the algorithms and methods and include computer code. For example, "A Transforming Function for Generation of Fractal Functions from Nucleotide Sequences" by J. Campione-Piccardo has a Pascal implementation of his method of barograms. "Gene Music: Tonal Assignments of Bases and Amino Acids" by N. Mukata and K. Hayashi is a paper describing a DNA-to-musical-notes transform that may help to make sequences more comprehensible. Their chapter includes 3 music scores produced from DNA data and a script for "Hypercard" and "HyperMIDI 2.0" (Macintosh computer). Some of the algorithms to "visualise" the sequences are in principle simple and at the same time very powerful; with some computer skills one may be able to reproduce or implement the methods described.
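To give a taste of how simple such a visualisation can be, here is a toy "DNA walk" of my own devising - not one of the book's algorithms - in the BASIC this newsletter favours. Each base moves a pen one step (A up, T down, C left, G right), so composition bias and repeats show up as drift and structure in the path. The short sequence is invented for the example; a real one would be read from a file.

S$ = "ATGGCGTACGCTTAGCCGTAGGCTAACGTT"  ' toy sequence, invented for illustration
SCREEN 12                              ' 640 x 480 graphics mode
X = 320: Y = 240                       ' start in the middle of the screen
PSET (X, Y)                            ' set the starting point of the walk
FOR I = 1 TO LEN(S$)
  C$ = MID$(S$, I, 1)
  IF C$ = "A" THEN Y = Y - 2           ' A: step up
  IF C$ = "T" THEN Y = Y + 2           ' T: step down
  IF C$ = "C" THEN X = X - 2           ' C: step left
  IF C$ = "G" THEN X = X + 2           ' G: step right
  LINE -(X, Y)                         ' draw from the last point to the new one
NEXT I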



Who is this book for? I think that it will be of interest to a broad range of readers, from biochemists and molecular biologists to computer and computer graphics scientists. It may also appeal to computer enthusiasts, as some of the algorithms described may also be applied to other symbolic sequences such as texts or codes. The type of graphic representations shown in the book may not be aesthetically as eye catching as some of the graphical renderings in previous titles by Pickover (The Pattern Book or Computers, Pattern, Chaos and Beauty), but we must not forget that the main purpose of the book is to present techniques to make sense out of biological data, while beauty is in the eye of the beholder.

What is chaos?



by Yvan Bozzonetti.



We think of chaos as sensitivity to starting conditions in rather simple physical systems. Chaos creates unpredictability in deterministic systems; it goes hand in hand with fractals and their scale-invariance property. Chaos is associated with nonlinear dynamics, and because quantum mechanics (QM) is linear, there is no quantum chaos, at least in basic QM. Chaos is a "short time" phenomenon: at the root of physics, quantum mechanics is the rule and so there is no chaos. Simply, for macroscopic objects such as planets or boulders the QM time constant is so large that we can't see quantum effects; our Universe is simply too young!
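To see that sensitivity on any home computer, a few lines of BASIC will do; this is my own illustration, not part of the article. Two copies of the logistic map x -> 4x(1-x), started a millionth apart, agree for a while and then part company completely:

X = .3                       ' first trajectory
Y = .300001                  ' second trajectory, one millionth away
FOR I = 1 TO 40
  X = 4 * X * (1 - X)        ' logistic map, fully chaotic at parameter 4
  Y = 4 * Y * (1 - Y)
  PRINT I, ABS(X - Y)        ' the gap roughly doubles at each step
NEXT I

After some twenty iterations the two trajectories are as far apart as the interval allows - exactly the unpredictability described above.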



The incompatibility between chaos and QM seems not too disturbing at first; after all, the poor fit between Relativity and QM is far more worrisome at the fundamental level. In fact, as we'll see, the chaos incompatibility extends to Relativity and even to classical physics, to the point of rendering classical physics incompatible with itself!



The Relativistic case is rather simple: the problem arises in Relativistic systems with more than one component. Because of the finite celerity of light, we cannot work them out as n-point systems interacting simultaneously and instantly. Extended systems transform from one coordinate system to another with derivatives, something unknown in fractals.



Thermodynamics in classical physics encounters the same problem. The practical answer to such problems amounts to something like: well, if derivatives are not allowed, use differentials. After all, we know QM solves everything at very small scale. So, file and forget the problem.



Then comes the classical physics incompatibility. Everybody starts to study classical physics in the Newtonian formalism, where nearly everything comes from the equation:



Force = Mass x acceleration.



This way of doing things produces very bad and cumbersome calculations for even a few interacting objects, so Lagrange was the first to suggest another approach, starting with the difference kinetic energy minus potential energy. This quantity is now called the Lagrangian L. Hamilton produced another computing road, starting with the sum kinetic plus potential energy, the Hamiltonian H. Then came Jacobi with his Jacobian J, Poisson with his brackets [ ], and modern theoreticians turning Hamilton's formalism into four classes H1, H2, H3, H4 of continuous parameters. Poisson's brackets were similarly extended into an infinite set of bracket systems.
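In symbols (my gloss; the article states it in words), with T the kinetic energy and V the potential energy:

L = T - V (the Lagrangian)

H = T + V (the Hamiltonian)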



You can do classical physics in any formalism of your choice: Newton's N, Lagrange's L, Hamilton's Hx, Jacobi's J or Poisson's [ ]. When Schrödinger read about Planck's work and the new constant of action (energy x time) h, he took classical physics in Jacobian form, put h in it and got his wave equation. Richard Feynman found the new mechanics too cumbersome to learn, so he found it simpler to reinvent it with path integrals, a form of Lagrangian formalism. Most physicists today learn physics with Hamilton's system, so they use H, or Hx if the problem needs its own formalism. Last year there appeared a unified classical-quantum approach based on Poisson's brackets. Depending on the taste of the user and the problem at hand, one formalism or another can be put to use; all of them give the same final result, they are different ways to represent the same reality. At least that is the general opinion...



The problem comes in the conversion from one system to another, where derivatives are an essential part of the process and no differential approximation can be tolerated. Each system - N, L, H, J, [ ] - produces its own world; worse, H and [ ] are infinite families of systems!



In retrospect, thermodynamics and relativity could be fitted with the same formalism, and each form would turn out to be incompatible with the others. Why not then recognise the basic incompatibility of chaos with these physical domains for what it is: a basic property of physics. To take each chaotic reference frame as inconvertible into any other is no worse than the four families of Hx systems.



H. Weyl was the first to point to the possibility of a scale transformation from place to place in space; this was his gauge theory of the Universe. That formalism was then turned into a phase shift in field theories to give birth to modern quantum gauge fields. Taking a step back, we can bring back the initial Weyl idea and squeeze anything with a fractal scale-invariance process, so that derivatives are resurrected in a chaotic world. Simply, when going from one reference frame to another in Relativity or thermodynamics, we must add a Weyl gauge transform. This is true too in the classical domain when turning from L to H or J, for example.



Can we compute with chaos? Most often, physics is concerned with monostable systems. When a system displays two possible stable states, it is said to be bistable - the most basic function of all computing devices. Now, monostability and bistability are only the first two steps in an infinite set of 2^N-stable systems. A four-stable system could process two bits simultaneously, for example. When N goes to infinity, we have chaos, that is, the possibility to process an infinite number of bits at the same time in the same device. N can be seen as the number of dimensions in a parameter space. Taking a subspace n of N with a finite number of dimensions, we can control the output of a device working on N. That is, we can introduce and extract information in a chaotic computing device. There are some hints that brains may rest on chaotic computing for some of their functions.
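The bit-count here is just a logarithm: a device with 2^N distinguishable stable states carries N bits, so a four-stable system carries two. A quick check in BASIC (my illustration):

PRINT LOG(4) / LOG(2)        ' 2 bits in a four-state device
PRINT LOG(1024) / LOG(2)     ' 10 bits in a 1024-state device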



As with the superposition of states in quantum computers, chaotic computers could process a virtually unlimited amount of information in a small device. The possibility of using the Weyl gauge transform opens the way to really astonishing possibilities: for example, a Weyl gauge field can produce a divergence in empty space. A field divergence is a source for that field. For a distant observer not taking into account the Weyl gauge property of the field, there would be something springing out of nothing. Eventually, all the field structure of a chaos computer could be moved or duplicated away. "Away" here must be understood as at another N level (more than one thinking at a time), at another gauge (more than one function in the same device), at another place (teleperception, teleoperation, teleprocessing), at another spatial scale (micro-macro-actions), at another time (divination?, time travel, back-time actions), at another space-time-quantum variable, nearly whatever it is: spin for supersymmetry, impulsion for teleportation, angular momentum to turn tables or anything else...



What will dominate the world tomorrow? Chaos computers or their foe, quantum devices? The one like the other can build its Universes and rebuild our own...

Can Computing Devices Beat Physical Laws?



by Yvan Bozzonetti.



That computing devices could beat physical laws seems impossible; computing is at most a representation of the world, not the world itself - at least, so goes the common thinking. The reality may not be so clear cut, as I try to show here in the thermodynamics domain. Thermodynamics, or statistical physics, rests on three laws:

The first law is simply energy or mass-energy conservation; it is the hardest law to crack.

The second law is about nondecreasing entropy in any transformation; it is often taken as the root of the arrow of time: at the elementary level all physical laws seem able to run towards the past as well as the future, and only entropy expansion forbids a backward time.

The third law sets the zero level of entropy at a temperature of 0 Kelvin for crystalline matter.



First, I look at the third law and its loopholes. The first loophole comes from spin: even perfectly organized arrays of atoms may have a large disorder, or entropy, at 0 K if their intrinsic angular momenta, or spins, point in different directions. To get a fusion reaction in a nuke, for example, all atoms must be in the same "quantum space" with lined-up spins. Cooling is not sufficient for that, and supersonic compression by the X-rays of a fission device is used to this end. Entropy is defined as the log of the number of states allowed to an atom in phase space (the 6-dimensional space made from the 3 dimensions of "ordinary" space plus 3 momentum coordinates). When an object is compressed there is less room in ordinary space, but because entropy can't be reduced, the number of states, or phase-space volume, can't be diminished. That is, space compression generates momentum-coordinate expansion: atoms move or vibrate faster and temperature goes up. This holds for "slow", ie subsonic, compression; under supersonic shrinking, individual atoms can't get, from shocks with nearby atoms, the boost needed to expand their momentum domain. At first sight it would seem the third law is broken in this case. Curiously, that does not happen: the atoms draw momentum from spin disorder, so that the spins get oriented in the same direction (and are cooled), while at the same time the atoms heat up to maintain their entropy.
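For reference (my gloss; the article states it in words), the definition invoked here is Boltzmann's:

S = k log W

where W is the number of states available to the system in the six-dimensional phase space and k is Boltzmann's constant. Squeeze the spatial coordinates at constant S and W must be recovered in the momentum coordinates - which is the compression-heating argument just made.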



What if you compress atoms with oriented spins at supersonic speed? I don't know of that particular experiment, but the outcome is fairly predictable: we would borrow entropy from another sublevel of matter, the most probable candidate being quarks and gluons inside nucleons. Because gluons create similar gluons continuously, there is an endless entropy spring at this level, and we can't go further down.



To save the second law (entropy never decreases), we have destroyed the third (the zero entropy level at 0 Kelvin). Special Relativity opens another door on the entropy laws. Because the celerity of light is finite, the six-dimensional phase space is limited to a single point. To extend it to a finite volume we must take into account as many 6-dimensional spaces as there are points in that volume. Because the answer is that there are infinitely many points in any finite volume, Relativistic thermodynamics uses infinite-dimensional phase spaces. If conditions inside an object turn relativistic, entropy is diluted along infinitely many dimensions; that is, it falls to zero for any subspace with a finite number of dimensions. This is the ultimate relativistic fridge. We save here the third law, but what about the second?



Recently, sonoluminescence - light produced by sound waves in liquids - has been ascribed to imbalance in quantum fluctuations of the electromagnetic field: when gas bubbles produced by cavitation collapse, the refractive index in their walls moves so fast that classical physics can't keep the rhythm. This is similar to supersonic compression in a fusion device. Simply, here the entropy doesn't come from spins; it springs from uncertainty in the electromagnetic field.



The general picture we have arrived at can be summarised as follows: the third law of thermodynamics is good only in the classical physics domain. If relativity (spin, infinite-dimensional phase spaces, ...) or quantum fields (electromagnetic, gluon colour, ...) come into play, the entropy ground level is at best displaced, at worst vanishes.



All of that is about tensions between the laws of thermodynamics, not about computing; it is here only to recall that what we call physical laws are not as clear cut as physics courses would have it.



Entropy is often represented as disorder, and computation or information as its negative. In the simplest form, introducing computation into a closed system must reduce its entropy. Because computation on quantum or chaotic systems can be done at nearly no energy cost per operation, producing a given amount of computation need not generate its own entropy in compensation. This is a purely theoretical statement, with no practical hints about how to implement it on a real system.



Assume we do computations with the quantum EM field (a quantum computer); we can cancel all classical entropy, but what about the quantum part, the part seen in sonoluminescence? It dwells on the same ground as the computer and can't be cancelled by it. Here, computing is an entropy shuttle, moving entropy out of the classical domain and into the quantum one; it does the reverse of what is seen in sonoluminescence, where entropy flows from the quantum to the classical sector. Special Relativity does a similar job when it dilutes classical entropy in infinite-dimensional phase space. There is the first intellectual result: Relativity works on classical physics as a computer does; there is the possibility of relativistic computers with tremendous power, even if we know nothing today about the technology of such a machine.



One of the first things we learn in a course on thermodynamics is about reversible (idealised) and irreversible (real) processes. To solve thermodynamics problems, irreversible processes are split into "near reversible" ones. With the background we now have at hand, a question arises: are there really irreversible processes, or are they simply reversible ones with an entropy input from another space, for example the quantum EM field? A clue can be found in textbooks: all processes are reversible (do not expand the entropy stock) if they are done over an infinite time span; any finite duration produces an irreversible process. We have seen with sonoluminescence an extreme case: a very fast process extracts entropy, in the form of thermal photons, from quantum electromagnetic fluctuations. Such a process has no cut-off speed, so every finite-duration thermodynamic process will extract entropy from the quantum field. This is nothing more than the entropy responsible for irreversibility in finite-time experiments. Entropy doesn't get larger; it flows from the quantum to the classical world. Now, recall that entropy expansion is responsible for the arrow of time. If a computer shuttles entropy back from the classical to the quantum domain, it can slow down time, freeze it, or run its classical part backward.



A gravitational field, in General Relativity, slows down time too. If there is a gravitational gradient in a closed system at equilibrium with maximum entropy, the time lag induced by gravitation produces a red shift, so that one part of the system looks cooler than another. Some free energy may then be extracted from a closed system at maximum entropy. Gravitation beats the second law to save the first. In the classical domain, computing that pumps entropy out into quantum fields would look like a post-Newtonian gravitational field, ie a linearized part of General Relativity. When computing pumps out as much entropy as is created by other thermodynamic processes, we would have the equivalent of a black hole horizon. If we pump even more entropy, so that the sum is reduced, then we would have the negative of a black hole (a black hole seen from inside), that is, a white hole.
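The quantitative statement behind "looks cooler" is the Tolman-Ehrenfest relation (a standard General Relativity result, my addition, not spelled out in the article): in a static gravitational field, the locally measured temperature of a system in thermal equilibrium varies so that

T x sqrt(g00) = constant

where g00 is the time-time component of the metric. Deeper in the gravitational well, g00 is smaller, so the equilibrium temperature there is higher - to a distant observer, the red shift makes that part look cooler.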



The physics of computing is certainly an extraordinary subject, with time machines, classical wormholes, black holes, horizons, systems able to fool all the laws of thermodynamics in classical Euclidean space, and so on. But what does computing mean in this context? Here, computing is equivalent to negentropy, or information, or organization as opposed to chance or randomness.



To illustrate that, assume we start with a Maxwell's demon. The demon looks at atoms with a statistical distribution of energy in a maximum-entropy closed system and sorts them out according to their momentum. In one bag the demon puts hot atoms, and in another he packs cold ones. A thermal motor may then be run between the hot bag and the cold one. This contradicts the second law of thermodynamics, because work is extracted from a system at equilibrium. The answer to the puzzle is this: to measure the energy of each individual atom, the demon must pay an energy bill precisely equal to what can be extracted from the bags. The second law is rescued! On the other hand, gravitation is smarter than any demon, and we have seen it can do the job without the initial expense.
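The usual way of totting up the demon's bill (a textbook result, my addition) is that acquiring and then erasing one bit of information costs at least

W = k T log 2, about 3 x 10^-21 joules per bit at room temperature (T = 300 K)

so sorting the atoms can never yield more work than the measurements consumed.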



Assume now there is a computer in the demon's place. The closed system contains billions of billions of atoms, and the computer ascribes a grid node to each atom in a representation of momentum space. The state of an atom at a given instant is correlated with the state of nearby atoms, because there have been collisions between them. We thus have a continuous field of atomic momentum in the system. Assume we measure pressure at atomic scale on the walls of the system. Pressure is the result of atoms impacting the walls - put another way, a measurement of atomic momentum. As in the demon case, that knowledge has an energy price and we must pay for it. Now comes a mathematical theorem stating, in brief: in a field obeying an inverse-square law of distance, the whole field can be computed everywhere if it is known on its boundaries. Now we have to pay the energy bill for only one atom in a billion or so, and with that we know the energy of each atom, so we can sort them out without further measurements.



The real work unfolds this way (a schematic toy version in BASIC follows the list):

A) Make a measurement on some atoms, say one million of them.

B) Draw a line between each atom pair and compute, for each line, what the momentum distribution must be for a given law of distance (metric).

C) Take some atoms on each line, say half a dozen, and see whether they fit the predicted value of momentum for their position. In the general case they surely do not, because the momentum field is not of the inverse-square kind for the adopted metric.

D) Adjust the metric to get the answer. Go back to C) and refine the metric; when it is good, go to the next step.

E) Take each line as the boundary of a surface and compute the momentum field for the full surface.

F) Sample some atoms in the surface and see whether they fit the predicted value; if not, refine the metric and sample anew.

G) Take surfaces as boundaries of volumes and compute momentum for the full volume, sample some atoms, ...
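The sketch promised above: a toy, one-dimensional stand-in for steps A) to D), in BASIC. The "field" is a power law P = A / X^M whose exponent plays the role of the metric; the field, the sample points and the update rule are all invented for illustration, since the article gives no concrete algorithm.

A = 100: MTRUE = 2.7      ' MTRUE is unknown in real life; here it only fakes the measurements
DIM X(5), P(5)
FOR I = 1 TO 5            ' step A: "measure" a few atoms
  X(I) = I + 1
  P(I) = A / X(I) ^ MTRUE
NEXT I
M = 2                     ' step B: first guess, an inverse-square metric
DO                        ' steps C and D: compare samples with prediction, refine
  E = 0: S = 0
  FOR I = 1 TO 5
    E = E + LOG(P(I) / A) + M * LOG(X(I))   ' logarithmic misfit of each sample
    S = S + LOG(X(I))
  NEXT I
  M = M - E / S           ' adjust the metric towards the data
LOOP UNTIL ABS(E) < .000001
PRINT "fitted exponent:"; M                 ' prints 2.7, recovering the hidden metric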



If we are in a semi-relativistic system, phase space may have far more than three dimensions. In the sampling process we can look at only one or two momentum coordinates per atom, so the computation may proceed on four or five dimensions of phase space. In the full process we may sample some 10^8 atoms and thereby know about 10^24 of them (near a gram of matter). Knowing that, and what we want after a given time, we can back-compute the wanted state with some corrections so that it fits the real state. Introducing reverse corrections into the evolution of the real state, we will bring it to the wanted state. Corrections are punctual inputs of atoms or energy with a given momentum spectrum.



Assume you want to run an object backwards in time for one century: you start by defining its actual state and run it in reversed time. Because atoms are quantum objects, there is some uncertainty as to what exactly the state at a given epoch was. Now, if your object is a car, two years ago it was not a boat. Knowing a car is a car today and was a car two years ago, you can filter out irrelevant solutions for one year ago. If you know your car was a car 120 years ago, you can work out its state one century ago with good precision. Your 20-year margin gives plenty of room to filter out all probabilities giving anything other than the wanted car.



On quantum grounds, all atoms with identical quantum numbers are indistinguishable, in practice and in theory. If you have lost some atoms from your car (iron, carbon...) you can add similar iron, carbon... atoms in any known quantum states and bring them to the states of the missing atoms in due time.



What is yesterday? Is it cancelled forever, or is it buried in quantum entropy? Can we send newly entered entropy in the classical domain back to its quantum domain and bring back the past? Can we slice out entropy with computing so that we can build virtual Universes, one for each epoch (second, day, year, ...)? Can we build billions of computed Universes, then choose one and seed the past with corrections so that the real world evolves into a carbon copy of the selected computed world? 1 000 years from now, will man ever have existed, or was he a mere quantum probability buried in unclassical entropy?

Impact of Computing Power.



by Yvan Bozzonetti.



Computing power is the fastest growing commodity: today's high-end PCs are as powerful as supercomputers of one generation ago. With long-word instructions, super-pipelining, vector processing, multiprocessors, asynchronous technology and Peltier-cooled chips made from silicon-germanium, it seems we have some power in store. When line widths fall below .1 micron, ballistic transistor technology will boost clock speeds up to ten times. After multimedia, surround sound, full-screen HDTV, stereoscopic 3D and virtual reality, what will come next?



The answer seems to be quantum computing, and there everything scales up at tremendous speed. Today most PCs work on 64 bits in a row, and graphics cards for workstations use 128-bit words. With state superposition, a quantum computer using 64 bits would process 2^64 operations at the same time, or something like 18 billion billion. With 128 bits that would amount to 3.4 x 10^38 operations done simultaneously, all of that in one billionth of a second, on monstrous numbers, in the space of a chip small enough to be implanted in a brain.
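
A quick check of those counts (a sketch; the figures are simply 2 raised to the word length):

DEFDBL A-Z
TWO = 2   ' double precision, so the 128-bit case does not overflow
PRINT "64-bit word :"; TWO ^ 64     ' about 1.8 x 10^19, the 18 billion billion above
PRINT "128-bit word:"; TWO ^ 128    ' about 3.4 x 10^38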



A direct brain link may be the other big revolution in computing. Today, snail neurons can be convinced to grow on chips; the same trick could open the way towards an implanted computer-brain interface. The coming age will not be the one of robots; it will be the dawn of superbrains with unthinkable processing power on prosthetic quantum computers. The first outcome will be to turn any brained animal into a civilizable individual. One century from now, Earth may be the home of an ecological civilization counting at least a thousand species, from whales to mice. Because data exchange will be handled by computers, there will be no communication boundaries: a mouse will be able to "talk" to a finch, and so on. There may also be purely artificial brains with no living counterpart, or ones built from copies taken from different living brains. One consciousness could run more than one body...



All of that is simple stuff; the next idea is more disturbing: it deals with artificial Universes. From quantum mechanics, we know the smallest thing is near 10^-33 cm long; from astrophysics, the biggest is near 10^28 cm; that is, the Universe is 10^61 elementary lengths (Planck lengths) across. Similar calculations give an age near 10^61 elementary units of time (Planck times). The four-dimensional "volume" of our Universe is thus 10^244, or 2^813. That is to say, a quantum computer processing words 813 bits long could simulate our entire Universe from its first second to the present day, from here to the most far away galaxy, with a precision 100 billion billion times better than the classical diameter of the proton. It could "refresh" its "picture" every 10^-43 second! In fact, a quantum computer could simulate a far larger Universe than the one we see.
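
A quick check of the word length, a sketch that just converts the power of ten into a power of two:

DEFDBL A-Z
REM The 4-D volume is 10^244 elementary cells; the number of bits
REM needed to index it is the base-2 logarithm of that count.
TEN = 10: TWO = 2
BITS = 244 * LOG(TEN) / LOG(TWO)
PRINT "word length ="; BITS     ' about 811 bits, near the 813 quoted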



An interesting add-on would be a 5th dimension with the law: any of the first four dimensions together with the 5th gives a subspace with projective plane geometry. That would allow instantaneous space travel over any distance and time travel to any epoch. The cost on the quantum computer would be a mere 10^61 bits, or words with 183 more bits.



Another option would be dual theory, a limited form of supersymmetry. In dual theory, the quarks (matter) inside protons and neutrons turn into gluons (energy), and gluons into quarks. That redistributes all the masses in the Universe in a different way for each proton or neutron. We call that the tangent Universe hypothesis. It may be turned to "reality" in a simulated Universe. There are some 10^78 baryons (protons, neutrons) in the visible Universe, so dual theory would create as many new Universes. The cost on the quantum computer would be 234 bits only.



The electro-weak force field may be dualized too: each electron would turn into a photon, and the photon, W+, W- and Z0 particles into electrons. Because there are some billions of photons for each electron, the electroweak dual world would be some 10^9 times larger and more massive than our own observable cosmos. The quantum cost: 10^78 x 10^9 = 10^87 bits, or 261 more bits in the computer word.



Summarizing all of that, you have a Universe as big as you can see in the best telescope, a 5th dimension, one tangent Universe for each baryon, and for each photon in each tangent Universe a new tangent space-time-5th-dim. What you need to get that is a quantum computer able to process 813 + 183 + 234 + 261 = 1491 bits. That needs far fewer components than what is integrated today on a single microchip; recall that all of this needs only a single computing cycle, maybe less than one nanosecond long.



I come now to wild ideas: Thinking Machines Corp. has put more than 65 000 processors in a single computer; what would a similar system look like with quantum technology? It could process words made of 32 x 65 000 bits, or more than 2 million bits. Can you think about such Universes?



Are we in a computer simulation run by a not-so-advanced civilization? Hard to say: the 1491-bit simulation seen above is far larger than the demonstrated Universe, and only an experiment testing the number of dimensions would give an answer. It seems only superstrings give a coherent picture of physical laws, and that theory asks for eleven dimensions for particles of matter and 26 for energy (not the same ones). Only a photon statistics test conducted at Planck's scale can give the answer. It seems photons produced in sonicated water by bubble collapse have indeed a Planckian origin. Testing that light with a photon counter may be our sole way to answer the question. (If the simulation doesn't run on 4 + 26 x 11 = 290 dimensions!)



What if someone in the simulated world builds a quantum computer? After all, the simulation includes quantum laws... What about quantum computers in that simulation of a simulation?...



But are such worlds simulations? Virtual reality on a classical computer is one form of advanced picture. Quantum simulations may be different. Quantum superposition of states is a physical law; the fact that we exploit it doesn't create it, and so, in some way, all possible quantum Universes pre-exist as quantum virtual possibilities. What the quantum computer does is give us a door towards an organized part of that quantum reality. It is no more artificial than Earth surface conditions in a space station, or living through winter with clothes in an electrically heated house.



I have said a projective plane geometry on some extra dimensions would allow time travel. If quantum-computed worlds are a pre-existing reality, that time travel could go backward not only to before the current computer run, but to before the computer's creation itself.



I have assumed up to now a digital quantum computer technology; what about analogue computing? After all, we use electrical currents in salt water to model water flow, or wind tunnels to look at aircraft wing design. Why not an analogue part in the quantum computer? The mathematical operation of eversion puts everything that was outside inside, and everything that was inside outside. This is illustrated by the inverting lion trap: you enter a solid cage set in a desert, you make an inversion, and you are then outside the cage and the lion is inside. Can we copy everything on Earth, down to the quantum state of each particle, and put it, using quantum eversion, inside a computed Universe? An analogue function on a quantum computer could, at least in principle, do that. The whole Earth could be copied as a routine process every second, day or century. A single quantum-computed Universe could contain a near unlimited number of copies.
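
For readers who want to play with the idea, here is a minimal sketch of plane inversion, the two-dimensional cousin of the eversion just described; the point and the unit circle are arbitrary choices for the example. A point P maps to P / |P|^2, which swaps the inside and the outside of the circle:

DEFDBL A-Z
REM Invert a point in the unit circle: (X, Y) -> (X, Y) / (X^2 + Y^2)
X = 3: Y = 4                  ' a point outside the circle, |P| = 5
D2 = X * X + Y * Y            ' squared distance from the centre
PRINT "outside point :"; X; Y
PRINT "inverted point:"; X / D2; Y / D2   ' now inside, at |P| = 1/5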



What about a civilization in a simulated Universe running on a computer with an eversion function? It could travel back in time to before the computer's creation and get out at our epoch, or in the dinosaurs' world, even if time travel is forbidden in our Universe! Indeed, we can conceive any physical law in a quantum computer and use it off limits in any Universe, including our own basic world, with the help of an eversion device.



Do you want to travel faster than light? Don't bother about its possibility! Go to a quantum simulation tailored to allow it, use it off limits with eversion, and you get instantly to where you want to be, whatever the distance. Quantum computing is really far more insane than you could think.

The Statute of Virtual Worlds.



by Yvan Bozzonetti.



Virtual reality is the buzzword today. To be effective, it needs computers in the 30 gigaflops range (billions of floating point operations per second), something 2 or 3 PC generations beyond the current Intel Pentium Pro. Virtual worlds are quite another matter: they start at a full Earth-like simulation, down to one proton radius, for ten billion years, with a clock speed near 10^21 cycles per second, or 10^105 operations in all. That puts the required computer speed near 10^100 operations per second, very far from current classical computers.
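
A back-of-the-envelope check of that operation count, done in base-10 logarithms to stay inside floating point range. The proton radius, the ten billion years and the 10^21 cycles per second are the figures quoted above; the Earth radius of 6.4 x 10^6 m is my own assumption:

DEFDBL A-Z
TEN = 10
REM log10 of proton-sized cells in an Earth volume (cells scale as
REM the cube of the radius ratio)
LCELLS = 3 * LOG(6.4D+06 / 1D-15) / LOG(TEN)
REM log10 of clock ticks: 1D10 years of 3.15D7 seconds, at 10^21 Hz
LTICKS = LOG(1D+10 * 3.15D+07) / LOG(TEN) + 21
PRINT "log10(operations) ="; LCELLS + LTICKS   ' about 104, near the 10^105 quoted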



Four or five technologies seem able to reach that mark in the next century; all use very highly parallel processing. The first one is the second-quantization photon computer, where nonlinear interactions between photons take the place of electric currents made from electrons or holes. The first-quantization photon computer is maybe 20 years away; it uses polarization, propagation states and energy multiplexing to do many simultaneous operations on different photons in the same material device. The technology limit is one operation per photon, something that will be reached within the very first few generations, maybe indeed in the first one.



Because there are not 10^100 photons in the observable Universe, that technology doesn't fit the bill. On the other hand, arrays of interferometers can slow down the light speed and, at the same time, expand the number of dimensions used by the wavefunction. In fact, the wave propagates in a Hilbert space with an infinite number of dimensions, but only a finite subset can be accessed. In the radio domain, resonators can keep a wave for up to 10^10 cycles, which amounts to as many interferometers (reflection on a cavity wall amounts to destructive interference of the wave with itself outside the cavity). It seems light traps in the far infrared can do a similar job now. Because each reflection amounts to a division of the wave in two, each part on the left or right boundary of a space with n + 1 dimensions, we have a 5-dimensional wave after one reflection, a 6-dimensional wave after two, and so on.



Assume we use 10^-3 electron-volt photons (in the microwave domain) and have 1.6 kW of photon power. This gives 10^25 photons per second. Each of them must then do 10^75 operations. In N dimensions there are N(N+1) polarization states; for N large, this is nearly N squared. Each such polarization projects itself into one dimension (or 3, but here N >> 1 or 3) as 2^N "shadows". We then have N^2 x 2^N "channels" in which to do an operation with a photon. For large N this is near 2^N, because the N^2 factor becomes negligible. Now, 10^75 is near 2^225, so we need some 225 internal reflections in the photon processing device to get the required computing power; this seems ridiculously few against the 10^10 seen before. Second-quantization photon computers are very good, it seems, at producing virtual worlds, even worlds far bigger than the minimum single planet seen before. They could simulate the whole observable Universe with many fields. If we entered such a simulation, we could have a hard time proving it is a mere advanced picture.



Classical electronic computers may not surrender: all current digital electronics use bistable gates, physical systems with 2^1 degenerate levels (two levels at the same energy). This can be seen as a first step on the chaos road, towards 2^N degenerate levels (chaos comes into play when N becomes infinite). A "near-chaos" gate would use N in the 10 - 100 range, so it could process from 1 000 to 10^30 or so bits at the same time. Going to N = 300 would bring the computer into the world simulation class. We could call such systems semi-quantum computers: a single electron wave would travel in the device at a time and use the uncertainty principle to take all the open ways simultaneously. Contrary to the quantum computer case, the word length has no effect on the processing power: here, only electrical currents are quantized, not information itself. In the same way, the gates are near-chaotic but the quantum wave functions are not; it is a curious mix of classical electronics with near-chaos properties and quantum systems without them.



Single electron transistors are a reality at laboratory level, and chaotic electronic devices are commonplace. To put the two properties in a single system and build logic functions from it is mere R & D in the line of current technology. If I were to place a bet on a supercomputer technology for the next century, I would choose this one: silicon is not dead!



There is an enormous factor, near 10^90, between ordinary virtual reality and full-blown world simulation. No doubt that desert will be paved in due time with relevant applications we can barely see today: from refined VR to uploading, from nanotech supervision to thermodynamic reversal of "irreversible" processes, from brain boosters to collective consciousness...



Most social activities may move into these worlds well before the era of full Universe simulations. The "real" world may then be no more than a veneer on a more profound social reality. Some people may even choose to live entirely in a virtual world, to meet Star Trek society or anything else in a Universe with distorted physical laws. If you like Flatland, you could have a single flat world extending for billions of light years. What would be the capacity of such a civilization to grasp the intricacies of what we call the real world? Would reality be left to some sublevel life form instructed to maintain and build the support system of energy supply and computers?



This may be the first branching-off in the history of civilization: a major part of society dropping out of (our) reality to follow its own course.



Quantum computers may have similar capacities to produce branching out of classical reality at an even higher level. On the other hand, analogue devices can give a way back from quantum-computed worlds to our current cosmological domain.



Chaos computing with Weyl's gauge transform may be the ultimate in computed worlds linked to reality. At that level it becomes difficult to say whether we have artificial Universes or a technological fitting to a preexisting reality. Weyl's gauge can be applied to classical thermodynamics, Special Relativity or the classical representations of physics (Newton's, Lagrange's, Hamilton's, Jacobi's and Poisson's).



Beyond VR, computer-boosted brains, uploading, and full societies uploaded with VR for each individual, there is the gigantic domain of virtual worlds where the simulation goes down to elementary physical laws. When will that come true? Given a 2.5-year doubling period for computer power and putting the current limit at 10^12 flops, a 10^88 expansion must take some 730 years. This must be taken as an upper limit, because new technologies such as ballistic transistors or near-chaos devices must induce enormous boosts from one generation to the next within a few years. I would bet on two to four centuries at most as the evolution duration. This is long for a current human life, but not too long if we take into account a technology such as cryonics. Cryonically suspended people may have a hard time, in their second life period, defining what Universe they are in.
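
The 730-year figure follows directly from the doubling period; a sketch of the arithmetic:

DEFDBL A-Z
TEN = 10: TWO = 2
REM doublings needed for a factor 10^88 gain, at 2.5 years per doubling
DOUBLINGS = 88 * LOG(TEN) / LOG(TWO)
PRINT "doublings ="; DOUBLINGS        ' about 292
PRINT "years ="; DOUBLINGS * 2.5      ' about 731: the "some 730 years"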



On the other hand, computed Universes may be the ultimate answer to overpopulation: give a double life to everyone, one in the basic world and another in a computed Universe, with a continuous link between them. Raise children only in the computed domain, keep the Earth-bound body on ice 99% of the time, and the world population falls to a sustainable level. Strangely, if we want to save coming generations from overcrowding in a too small world, we must use and perfect cryonics today. The computer part of the project is well launched and needs only to continue on its current road.



What will tomorrow be made of? Cheap space travel with exotic physical laws, as most sci-fi authors think? A more probable outcome would be a civilization ruled from computed Universes by computer-boosted brain creatures, few of them with a link to man. Some mouse-like species with extended longevity and fast-reproducing capabilities would be in a better position to win first place. With 10^10 galaxies in a computed Universe, 10^11 stars per galaxy, 10^4 Earth surfaces on the planets of each sun and 10^13 mice per Earth, the total population could amount to 10^38 "people". With four months per generation, going from 2 to 10^38 would need only 43 years. If there was a way back to our Universe, such a technology-boosted species could invade everything, up to the most remote observable part of the sky, in a single invasion wave. Still eager to meet extraterrestrial civilizations with similar capabilities?
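
Both figures check out under the simplest assumption, a population that merely doubles each generation; a sketch:

DEFDBL A-Z
TEN = 10: TWO = 2
REM total head count: galaxies x stars x Earth surfaces x mice per Earth
PRINT "log10(population) ="; 10 + 11 + 4 + 13   ' = 38
REM generations needed to double from 2 up to 10^38, three per year
GENS = 38 * LOG(TEN) / LOG(TWO) - 1
PRINT "years ="; GENS / 3                       ' about 42, near the 43 quoted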



Bagula's Programs



Here follows a selection of BASIC listings and their images as sent in by Roger Bagula on a disk dated March 1996. Web readers should see them in colour.



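REM Mandelbrot-type iteration Z' = Z^S + C with a folded angle term;
REM colour = iteration count, with colour 1 marking the convergence rim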
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

PRINT "input general power:Z'=Z^S+C"

INPUT s

SCREEN 12

E = SQR(151): E1 = E / 5

M1 = M * 480 / 640

PI = 4 * ATN(1)

FOR i = -E1 TO E1 STEP 2 * E1 / M1

d = d + 1

FOR j = -E1 TO E1 STEP 2 * E1 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = SQR(a * a + b * b): R = d1

DO

IF k = 0 THEN f = a: g = b: R = d1

k = k + 1

x = f: y = g: L = R

IF x = 0 AND y > 0 THEN W = PI / 2

IF x = 0 AND y < 0 THEN W = -PI / 2

IF x > 0 AND y > 0 THEN W = ATN(y / x)

IF x > 0 AND y < 0 THEN W = ATN(y / x)

IF x < 0 AND y > 0 THEN W = ATN(y / x) + PI

IF x < 0 AND y < 0 THEN W = ATN(y / x) - PI

IF x >= 0 AND y = 0 THEN W = PI / 2

IF x < 0 AND y = 0 THEN W = -PI / 2

IF INT(W / PI) MOD 2 = 1 THEN W = 2 * W ELSE W = 2 * PI - 2 * W

f = R ^ s * COS(s * W) + a

g = R ^ s * SIN(s * W) + b

R = SQR(f * f + g * g)

IF R / L > .95 AND R / L < 1.05 THEN KK = 1: GOTO 4

LOOP UNTIL ABS(L - R) < 10 ^ -3 OR k > INT(E * E) OR R >= E

IF R / L < 1 THEN t = -1 ELSE t = 1

KK = INT(E * E - t * k)

4 LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), KK MOD 8, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END



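REM Alternating pair of tent-folded maps with exponents
REM s = LOG(2)/LOG(3) and v = 2 - s; colour = iteration count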
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

SCREEN 12

e = 1.11

M1 = M * 480 / 640

pi = 4 * ATN(1)

s = LOG(2) / LOG(3): v = 2 - s

FOR i = -3 TO 4 STEP 7 / M1

d = d + 1

FOR j = -3 TO 4 STEP 7 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = a + b: r = d1

DO

IF k = 0 THEN f = a: g = b: r = d1

k = k + 1

x = 1 - ABS(f - g)

y = 1 - ABS(f + g - 1)

l = r

f1 = 1 - SQR(ABS(x * y)) ^ s

g1 = ABS((x + y) / 2) ^ v

f2 = SQR(ABS(x * y)) ^ s

g2 = 1 - ABS((x + y) / 2) ^ v

IF k MOD 2 = 1 THEN f = f1: g = g1 ELSE f = f2: g = g2

r = f + g

LOOP UNTIL ABS(l - r) < 10 ^ -3 OR k > 64 OR r >= e OR r < .05

IF f > 0 THEN h = 1 ELSE h = 0

LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), h + k MOD 15, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END

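REM Two coupled tent-like maps weighted by a rotating angle w;
REM colour = iteration count, shifted by 1 on the convergence rim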
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

SCREEN 12

e = 2

M1 = M * 480 / 640

pi = 4 * ATN(1)

S = LOG(2) / LOG(3): V = 2 - S

FOR i = -1.5 TO 2.5 STEP 4 / M1

d = d + 1

FOR j = -1.5 TO 2.5 STEP 4 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = a + b: R = d1

DO

IF k = 0 THEN f1 = a: g1 = b: R1 = d1: f2 = a: g2 = b: R2 = d1: w = pi / 4: R = R1 + R2

k = k + 1

L = R

x1 = f1: y1 = g1: l1 = R1

f1 = (1 - ABS(x1 - y1) ^ S) * COS(w)

g1 = (1 - ABS(x1 + y1 - 1) ^ V) * COS(w)

R1 = f1 + g1

x2 = f2: y2 = g2: l2 = R2

f2 = (ABS(x2 - y2) ^ S) * SIN(w)

g2 = (ABS(x2 + y2 - 1) ^ V) * SIN(w)

R2 = f2 + g2

IF R1 <> 0 THEN w = ATN(R2 / R1) ELSE w = 0

R = R2 + R1

IF R / L > .95 AND R / L < 1.05 AND R > LOG(3) / LOG(2) THEN H = 1: GOTO 4 ELSE H = 0

LOOP UNTIL ABS(L - R) < 10 ^ -3 OR k > 64 OR R >= e

4 LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), (H + k) MOD 16, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END

REM MOD 2 FRACTAL FUZZY FIELD

REM WITH CONVERGENCE RIM DOMAIN ALGORITHM

REM BY R. L. BAGULA 16 MARCH 1996 COPY RIGHTS RESERVED

PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

SCREEN 12

e = LOG(3) / LOG(2)

M1 = M * 480 / 640

pi = 4 * ATN(1)

s = LOG(4) / LOG(3): v = 2 - s

FOR i = -1.25 TO 2.25 STEP 3.5 / M1

d = d + 1

FOR j = -1.25 TO 2.25 STEP 3.5 / (M + .5)

c = c + 1

a = j: b = i: K = 0: d1 = a + b: R = d1

DO

IF K = 0 THEN F = a: G = b: R = d1

K = K + 1

x = F: y = G: L = R

F1 = 1 - ABS(x - y) ^ s

G1 = 1 - ABS(x + y - 1) ^ v

F2 = ABS(x - y) ^ s

G2 = ABS(x + y - 1) ^ v

IF K MOD 2 = 1 THEN F = F1: G = G1 ELSE F = F2: G = G2

R = F + G

IF R / L > 1 - .05 AND R / L < 1.05 THEN H = 1: GOTO 4 ELSE H = 0

LOOP UNTIL ABS(L - R) < 10 ^ -3 OR K > 64 OR R >= e

4 LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), (K + H) MOD 16, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END



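REM Tent-like map pair with exponents S = LOG(4)/LOG(3) and V = 2 - S;
REM colour = iteration count at convergence or escape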
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

SCREEN 12

e = LOG(3) / LOG(2)

M1 = M * 480 / 640

pi = 4 * ATN(1)

S = LOG(4) / LOG(3)

V = 2 - S

FOR i = -1.5 TO 2.5 STEP 4 / M1

d = d + 1

FOR j = -1.5 TO 2.5 STEP 4 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = a + b: r = d1

DO

IF k = 0 THEN f = a: g = b: r = d1

k = k + 1

x = f: y = g: l = r

f = 1 - ABS(x - y) ^ S

g = 1 - ABS(x + y - 1) ^ V

r = f + g

LOOP UNTIL ABS(l - r) < 10 ^ -3 OR k > 64 OR r >= e

LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), k MOD 16, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END



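REM Alternates a tent-like step with a polar Z^4 step about (.5, .5);
REM colour = iteration count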
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

SCREEN 12

e = LOG(3) / LOG(2)

M1 = M * 480 / 640

pi = 4 * ATN(1)

FOR i = -1! TO 3 STEP 4 / M1

d = d + 1

FOR j = -1! TO 3 STEP 4 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = a + b: r = d1

DO

IF k = 0 THEN f = a: g = b: r = d1

k = k + 1

x = f: y = g: l = r

IF k MOD 2 = 1 THEN x = f - .5: y = g - .5: r = SQR(x * x + y * y)

IF x = 0 AND y > 0 THEN w = pi / 2

IF x = 0 AND y < 0 THEN w = -pi / 2

IF x > 0 AND y > 0 THEN w = ATN(y / x)

IF x > 0 AND y < 0 THEN w = ATN(y / x)

IF x < 0 AND y > 0 THEN w = ATN(y / x) + pi

IF x < 0 AND y < 0 THEN w = ATN(y / x) - pi

IF x >= 0 AND y = 0 THEN w = pi / 2

IF x < 0 AND y = 0 THEN w = -pi / 2



f1 = 1 - ABS(x - y)

g1 = 1 - ABS(x + y - 1)

f2 = r ^ 4 * COS(4 * w)

g2 = r ^ 4 * SIN(4 * w)

IF k MOD 2 = 1 THEN f = f1: g = g1 ELSE f = f2: g = g2

r = ABS(f) + ABS(g)

LOOP UNTIL ABS(l - r) < 10 ^ -3 OR k > 64 OR r >= e

LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), k MOD 8, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END



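REM Mandelbrot-type map Z' = Z^S + C plus a two-step memory term
REM 3*Z - 3*Z(n-1) + Z(n-2); colour = iteration count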
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

PRINT "input general power:Z'=Z^S+C"

INPUT s

SCREEN 12

e = SQR(151): e1 = e / 10

M1 = M * 480 / 640

pi = 4 * ATN(1)

FOR i = -e1 TO e1 STEP 2 * e1 / M1

d = d + 1

FOR j = -e1 - 1 TO e1 - 1 STEP 2 * e1 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = SQR(a * a + b * b): r = d1

DO

IF k = 0 THEN f = a: g = b: r = d1: x0 = f: y0 = g: x00 = f: y00 = g

k = k + 1

x = f: y = g: l = r

IF x = 0 AND y > 0 THEN w = pi / 2

IF x = 0 AND y < 0 THEN w = -pi / 2

IF x > 0 AND y > 0 THEN w = ATN(y / x)

IF x > 0 AND y < 0 THEN w = ATN(y / x)

IF x < 0 AND y > 0 THEN w = ATN(y / x) + pi

IF x < 0 AND y < 0 THEN w = ATN(y / x) - pi

IF x >= 0 AND y = 0 THEN w = pi / 2

IF x < 0 AND y = 0 THEN w = -pi / 2

f = 3 * x - 3 * x0 + x00 + r ^ s * COS(s * w) + a

g = 3 * y - 3 * y0 + y00 + r ^ s * SIN(s * w) + b

r = SQR(f * f + g * g)

x00 = x0: y00 = y0

x0 = x: y0 = y

LOOP UNTIL ABS(l - r) < 10 ^ -3 OR k > INT(e * e) OR r >= e

IF ABS(r - l) <= 1 THEN t = 1 ELSE t = -1

kk = INT(e * e - t * k)

LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), kk MOD 16, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END

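REM Mandelbrot-type map Z' = Z^S + Z^-S - Z^-1 - Z + C, worked in
REM polar form; colour = iteration count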
PRINT " INPUT partition: try 640 for slow 64 for fast:"

INPUT M

PRINT "input general power:Z'=Z^S+C"

INPUT s

SCREEN 12

e = SQR(151): e1 = e / 7

M1 = M * 480 / 640

pi = 4 * ATN(1)

FOR i = -e1 TO e1 STEP 2 * e1 / M1

d = d + 1

FOR j = -e1 TO e1 STEP 2 * e1 / (M + .5)

c = c + 1

a = j: b = i: k = 0: d1 = SQR(a * a + b * b): r = d1

DO

IF k = 0 THEN f = a: g = b: r = d1

k = k + 1

x = f: y = g: l = r

IF x = 0 AND y > 0 THEN w = pi / 2

IF x = 0 AND y < 0 THEN w = -pi / 2

IF x > 0 AND y > 0 THEN w = ATN(y / x)

IF x > 0 AND y < 0 THEN w = ATN(y / x)

IF x < 0 AND y > 0 THEN w = ATN(y / x) + pi

IF x < 0 AND y < 0 THEN w = ATN(y / x) - pi

IF x >= 0 AND y = 0 THEN w = pi / 2

IF x < 0 AND y = 0 THEN w = -pi / 2

f = -x + r ^ (-s) * COS(-s * w) - r ^ (-1) * COS(-w) + r ^ s * COS(s * w) + a

g = -y + r ^ (-s) * SIN(-s * w) - r ^ (-1) * SIN(-w) + r ^ s * SIN(s * w) + b

r = SQR(f * f + g * g)

LOOP UNTIL ABS(l - r) < 10 ^ -3 OR k > INT(e * e) OR r >= e

IF ABS(r - l) <= 1 THEN t = 1 ELSE t = -1

kk = INT(e * e - t * k)

LINE (c * 640 / M, d * 480 / M1)-((c + 1) * 640 / M, (d + 1) * 480 / M1), kk MOD 16, BF

IF c > M THEN c = 0

NEXT j

NEXT i

5 REM beep delay loop

BEEP

FOR j = 1 TO 100: NEXT j

IF INKEY$ = "" THEN GOTO 5

END

Tent Operations on IFS Sierpinski





By Malcolm Lichtenstein





' Tent operations on IFS Sierpinski

print "Press any key for second display"

' left figure - tent on IFS Sierpinski

' right figure - inverse tent on IFS Sierpinski

screen 12

window screen (0,0)-(500,200)

defdbl a-z

randomize timer

numit=100000



dim a(3),b(3)
a(1)=0:b(1)=0

a(2)=0:b(2)=1

a(3)=1:b(3)=0

r=rnd(1):x=r



for n=1 to numit

k=int(rnd(1)*3)+1

x=x/2+a(k)

y=y/2+b(k)

'0,1,0 tent for range 0 to 2

x=1-abs(1-x)

y=1-abs(1-y)

pset (x*200+20,y*100+20)

next n



' inverse tent

x=r

for n=1 to numit

k=int(rnd(1)*3)+1

x=x/2+a(k)

y=y/2+b(k)

'1,0,1 -- inverse tent for range 0 to 2

x=abs(1-x)

y=abs(1-y)

pset(x*200+270, y*100+20)

next n



