Thought-provoking quotes in philosophy, art, and science

You can coherently describe a state of nothingness. It’s a state in which everything is not self-identical. "For all x, x is unequal to x": that sentence of logic describes a state of nothingness. It doesn’t help the imagination, but it doesn’t give rise to any contradictions. It can only be true if nothing exists, because if anything exists, it equals itself.
To what extent does the notion of fundamental particles even make sense? It’s extremely common for a theory to have two or more (equivalent) descriptions in terms of different sets of basic particles or fields. E.g., the use of the Jordan-Wigner transform shows that there is an equivalence between certain spin chains and systems of free Fermi particles. The answer to the question “Is the system really a set of spins or a set of free fermions?” is ambiguous. It depends not on properties intrinsic to the system, but rather on other external systems to which it is coupled (for, e.g., state preparation and measurement). This is absolutely remarkable! It means the question “what is this system made of?” in some sense depends on the other systems which interact with it, that is, is not entirely an intrinsic property of the system itself. Change those other systems, and there may be a sense in which you change what the system is built of.
The dream of personal computing in the 1960s was the dream of making a computer authoring medium for all.
By definition, something is interesting if it demands more thought. A fact that is obvious and totally understood does not require further thought. The fact that 6 times 7 is 42 is totally comprehensible and uninteresting. It’s when we are not certain about ideas that we need to confirm them and think about them. The search for better patterns will always be interesting.
The assumption of the bare valuelessness of mere matter led to a lack of reverence in the treatment of natural or artistic beauty. Just when the urbanisation of the western world was entering upon its state of rapid development, and when the most delicate, anxious consideration of aesthetic qualities of the new material environment was requisite, the doctrine of the irrelevance of such ideas was at its height. In the most advanced industrial countries, art was treated as frivolity.
The core claim of Buddhism is that the reason we suffer, and the reason we make other people suffer is because we don't see the world clearly. This is an amazing claim because it says that there's a natural convergence between the truth about the world out there, moral truth, and well being.
One of the most interesting things about Buddhist philosophy is that it posits two ways to appreciate the truth of its claims: there is the way familiar in Western philosophy where you just make arguments for them—and Buddhist philosophers have done that through the ages, but there's also the direct experiential apprehension of the truth of these things.
Immanence can only be immanent to what is and since the future is not, the ontological position of immanence can make no speculative claim on the future without reintroducing a transcendence of the future beyond what is.
No one can desire to be blessed, to act rightly, and to live rightly, without at the same time wishing to be, act, and to live—in other words, to actually exist.
Part of the surcharge that we pay for panpsychism (not, after all, itself an immediately plausible ontology) is that we must give up on the commonsense distinction between the experience and the experiencer. At the basic level, headaches have themselves.
We can’t prove that we are conscious; but that is hardly surprising since there is no more secure premise from which such a proof could proceed.
I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time. If we take experience as fundamental, then we can go about the business of constructing a theory of experience.
If you're an artist, you might begin to use sketch pads or complex software programs to layer things one on top of the other. Under those conditions we can solve problems and produce works of art that we couldn't otherwise produce. If you then turn around and say yeah, but all the cognitive work is being done by the bit that's inside the head, then it's not really quite so clear why we shouldn't be able to do this just by doing stuff inside the head.
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.
One has to understand that when someone has made an essential discovery in the history of humanity, he does not know how to say it: there is no conceptual system, no terminology, that is to say no word, to say it. That is why philosophy is gibberish: when Husserl wants to tell you what he means, he uses "transcendental", which he moreover does not use in Kant's sense. It becomes difficult, and that, by the way, is why there are professors of philosophy, and so everyone has to earn a living; a few fairly important jobs have to be maintained.
Some theorists had argued that disability was often a feature less of a person than of a built environment that failed to take some needs into account; the extended-mind thesis showed how clearly this was so.
In physics, there are both experimental and theoretical physicists, but there are fewer theoretical neuroscientists or psychologists—you have to do experiments, for the most part, or you can’t get a job. So in cognitive science this is a role that philosophers can play.
Understanding cannot be transmitted.
Joy follows pain not only because in the world a favorable event follows an unfavorable event, but largely because joy can follow pain.
Consciousness has an observer-independent existence, and so requires an explanation which is also observer-independent.
Whatever it is that we know, it is not separate from what we do to know it.
Every form of painting capable of moving us is in reality abstract, i.e. it is not content to reproduce the world but seeks to express the invisible power and invisible life that we are.
Once we get a complete knowledge of the causal structure of something (for example digestion), we do an ontological reduction: we say "digestion is nothing but all these processes". We do an ontological reduction based on the causal reduction. But you can't do that with consciousness. In the case of consciousness, you get a causal reduction but no ontological reduction.
We are not physical machines that have somehow learned to think, but thoughts that have learned to create a physical machine.
Civilization is revving itself into a pathologically short attention span. The trend might be coming from the acceleration of technology, the short-horizon perspective of market-driven economics, the next-election perspective of democracies, or the distractions of personal multi-tasking. All are on the increase. Some sort of balancing corrective to the short-sightedness is needed: some mechanism or myth which encourages the long view and the taking of long-term responsibility, where 'long-term' is measured at least in centuries.
A significant barrier to progress in computer science is the fact that many practitioners are ignorant of the history of computer science: old accomplishments (and failures!) are forgotten, and consequently the industry reinvents itself every 5-10 years.
Thinking, like life, is never complete, it is a possibility that never exhausts itself.
From the perspective of the materialist, there's nothing more real than the atom (let's say). From the perspective of a philosopher of being, there's nothing more real than suffering.
The most remarkable thing about the brain is that it wasn't designed. As a result, how the brain works is largely an irrelevant question. Brain architecture is the result of an optimization process. So what really matters for reproducibility is the optimization problem itself. The hypothesis space, the objective function(s). Everything else is noise. Essentially: it doesn't matter what the answer is, because one can mechanically derive it once the question has been asked. What matters is the question.
In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, in order to make a perfect and beautiful machine, it is not requisite to know how to make it. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin's meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all of the achievements of creative skill.
Mathematics is an intricate combination of inventions and discoveries. We invent the concepts, and then we discover the relations among the concepts. For example: there is no square root of -1. At some point, humans had to invent that concept. Once they invented the concept, they discovered that there are all kinds of mathematics they can do with that.
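The invention/discovery split can be seen in miniature in any language with built-in complex numbers (this sketch is mine, not the quote's author's): the concept i is stipulated once, and relations such as Euler's identity are then found, not chosen.

```python
import cmath

# The invented concept: i, written 1j in Python, defined by i*i == -1.
i = 1j
assert i * i == -1  # the defining property, true by construction

# A discovered relation among the invented concepts: e^(i*pi) = -1.
# Nobody stipulated this; it follows from the definitions.
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
```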
Moore's law looks like a physical law and people talk about it that way. But actually if you're living it, it doesn't feel like a physical law. It's really a thing about human activity. It's about vision, it's about what you are allowed to believe, because people are really limited by their beliefs. They limit themselves by what they allow themselves to believe as possible. So when Gordon Moore made that observation, he really gave us all permission to believe that would keep going.
The operating system was a layering of software that tried to turn a sometimes poorly designed machine into one that was more fruitful for the software environment.
We live in a world of applications. But if you think about it, applications are one of the worst ideas anybody had, and we thought we got rid of them at PARC in the 70s, because applications draw a barrier around your ability to do things. What you really want is to gather the resources you need to you and make the things that you want out of them.
Free software has a systemic devaluation of UX and end-user needs, leading to elitism founded on mastery of bad tools.
The world is a dynamic mess of jiggling things, if you look at it right. And if you magnify it, you can hardly see anything anymore, because everything is jiggling in its own pattern, and there are lots of little balls and... It's lucky that we have such a large-scale view of everything that we can see them as things, without having to worry about all these little atoms all the time.
A drawing is an archive of its maker's muscles.
Hardware: the parts of a computer system that can be kicked.

Dynamics (physics) simulations consist of objects and data. Objects are merely repositories of data. When you see an object, such as a 3D ball on your screen, it’s because the object has geometry data attached to it, which by convention a 3D software package draws in the viewer and renders.

The data attached to objects is, in some sense, arbitrary: it can be pretty much anything. There are no constraints on how a piece of data is named or what it contains. But only certain names and types of data are meaningful to dynamics solvers.
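A minimal sketch of this object-plus-attached-data model (the class and data names here are hypothetical, invented for illustration; real 3D packages have their own conventions): objects are bare containers of arbitrarily named data, and a solver reacts only to the names it recognizes.

```python
class SimObject:
    """An object is merely a repository of named data."""
    def __init__(self, **data):
        self.data = dict(data)  # any names, any contents; no constraints

def gravity_solver(obj, dt=0.1, g=-9.8):
    # The solver only gives meaning to data named "pos" and "vel".
    # Any other data attached to the object is simply ignored.
    if "pos" in obj.data and "vel" in obj.data:
        obj.data["vel"] += g * dt
        obj.data["pos"] += obj.data["vel"] * dt

# "color" is perfectly legal data, but meaningless to this solver.
ball = SimObject(pos=10.0, vel=0.0, color="red")
gravity_solver(ball)
```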

Artificial intelligence will not replace artists, but artists who decide not to use it could be replaced by artists who do.
I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only mischief.
For sufficiently obvious reasons, every generation treats the life it finds on arriving in the world as definitively settled, except for the details it has an interest in transforming. This is an advantageous conception, but a false one. At any moment the world could be transformed in every direction, or at least in any direction; it has that, so to speak, in its blood. That is why it would be original to try to behave not as a definite person in a definite world in which, one might say, only a button or two remain to be adjusted (what is called evolution), but, from the outset, as a person born for change in a world created to change.
Failure-free operations require experience with failure.
Most people find the concept of programming obvious, but the doing impossible.

Imagine a world in which physicists did not have a single concept of equations or a standard notation for writing them. Suppose that physicists studying relativity wrote the "einsteinian" m ↗ c² ↩ E instead of E = mc², while those studying quantum mechanics wrote the "heisenbergian" in some other idiosyncratic notation; and that physicists were so focused on the syntax that few realized that these were two ways of writing the same thing.

Computer scientists are so focused on the languages used to describe computation that they are largely unaware that those languages are all describing state machines.
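A toy illustration of the point (my example, assuming nothing beyond the quote itself): the same computation written as an ordinary program and as the explicit state machine it denotes, where a state is a tuple of variable values and the program is a transition function.

```python
# 1. The computation as a program in a language: sum 1..n.
def sum_to(n):
    total, i = 0, 0
    while i < n:
        i += 1
        total += i
    return total

# 2. The same computation as a state machine: states are (total, i)
#    pairs, and `step` is the transition function. Halting is the
#    absence of a next state.
def step(state, n):
    total, i = state
    if i < n:
        return (total + i + 1, i + 1)
    return None  # no transition: halt

def run(n):
    state = (0, 0)                               # initial state
    while (nxt := step(state, n)) is not None:   # iterate transitions
        state = nxt
    return state[0]
```

Both describe the identical sequence of states; only the notation differs.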

Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.

A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.

Mechanics is the study of those systems for which the approximations of mechanics work successfully.
The "easy to learn and natural to use" has become a sort of god to follow. The marketplace is driving it, and it’s successful, and you could market on that basis. But how do you ever migrate from a tricycle to a bicycle? A bicycle is very unnatural and hard to learn compared to a tricycle, and yet in society it has superseded all the tricycles for people over five years old. So the whole idea of high-performance knowledge work is yet to come up and be in the domain.
Looking back, I think we made a complete mistake when we were doing the interface at PARC because we assumed that the kids would need an easy interface because we were going to try and teach them to program and stuff like that, but in fact they are the ones who are willing to put hours into getting really expert at things.
User tests revealed that casual users don’t easily understand the idea of references, or of having multiple references to the same value. They find it easier to understand a model in which values are copied or moved than one in which references are assigned.
Have you ever put together a jigsaw puzzle? Takes a lot of time and effort, right? What if, when you were finished putting it together, a friend of yours looked at it and said "Yup, looks done" — and then started telling everyone that HE was the one who solved it?
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

It has always seemed odd to me that computer science refers to the number of operations necessary to execute an algorithm as the “complexity” of the algorithm. For example, an algorithm that takes O(N³) operations is more complex than an algorithm that takes O(N²) operations. The big-O analysis of algorithms is important, but “complexity” is the wrong name for it.

Software engineering is primarily concerned with mental complexity, the stress on your brain when working on a piece of code. Mental complexity might be unrelated to big-O complexity. They might even be antithetical: a slow, brute-force algorithm can be much easier to understand than a fast, subtle algorithm.
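A concrete instance of the trade-off (my example, not the author's): two solutions to the same problem, where the asymptotically slower one carries far less mental load.

```python
# Problem: do any two distinct numbers in xs sum to target?

def two_sum_brute(xs, target):
    # O(n^2) operations, but almost no mental complexity:
    # literally try every pair.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] + xs[j] == target:
                return True
    return False

def two_sum_fast(xs, target):
    # O(n) operations, but subtler: you must convince yourself that
    # checking "is my complement among the values seen so far?"
    # considers every pair exactly once.
    seen = set()
    for x in xs:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```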

Layers, modules, indeed architecture itself, are means of making computer programs easier for humans to understand. The numerically optimal method of solving a problem is almost always an unholy tangled mess of non-modular, self-referencing or even self-modifying code, whether it's heavily optimized assembler code in embedded systems with crippling memory constraints or DNA sequences after millions of years of selection pressure. Such systems have no layers, no discernible direction of information flow, in fact no structure that we can discern at all. To everyone but their author, they seem to work by pure magic.

In software engineering, we want to avoid that. Good architecture is a deliberate decision to sacrifice some efficiency for the sake of making the system understandable by normal people. Understanding one thing at a time is easier than understanding two things that only make sense when used together. That is why modules and layers are a good idea.

In my opinion, what many people mean by the word "understand" simply isn't practical or relevant to mathematics. For example, it often carries the connotation that understanding something means reducing it to something obvious (e.g. something the speaker can "picture"), or that understanding is about what something "is" rather than about how you can use it.
The industrial explosion on earth began just two or three hundred years ago. Now if technological civilizations can last tens of thousands of years, how do you explain the extraordinary coincidence that you were born in the first few generations of this one?
The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal.
Mystical experience is a bodily experience, and not merely an intellectual one, of the acquisition and transmission of knowledge. It is the idea that knowledge is acquired through the flesh, through the denial and control of the flesh, through suffering inflicted on one's own body. That world, that "form of life" of felt, bodily understanding, is a world that is no longer immediately intelligible past the end of the 17th century. Michel de Certeau showed this elegantly, and Pierre Chaunu underlined it in an article on what he calls the "anti-mystical turn" of Catholic Europe in the years 1660-1680. Over the course of the 17th century a kind of great reordering of knowledge and experience took place, one that increasingly dissociated knowledge from bodily experience. And it is the rise of the mediation of the text, of the book as the carrier of knowledge, that authorized this disqualification of the body as a site of knowledge. One no longer learns in a face-to-face bodily relation with a master or with God; one passes through the intermediate body of a printed work, of something that can be "detached" from a given situation. By the end of the 17th century that earlier world, that universe of mystical experience, was already no longer comprehensible, so how could it be today?
Museums have separated art from normal experience.