There are examples of mathematics in nature, such as Fibonacci numbers, fractals, etc. Are there examples of encrypted information?
What I am looking for is a pattern that seems random on the surface, but once you put it through a "decryption" process, it is actually something else. Not patterns that seem random because they are complex.
One example that comes close is DNA. But I see DNA as a form of encoding, not encryption.
One example is in the development of the immune system. V(D)J recombination, antibody production, and T-cell receptor production generate a specific "key" that, in most cases, can only be bypassed by that individual's molecular inventory.
In that sense, parasites, mutualists, and commensalists could be viewed as black, white, and grey hat hackers, respectively.
Another example is the organism-specific aminoacyl-tRNA synthetase inventory and the matching tRNAs.
One example is sperm-egg attraction. Many species release their eggs and sperm to be joined in water. The egg must try to avoid being fertilized by the sperm of a different species. So an egg secretes a special peptide which attracts the sperm to its location. The amino acid sequence of that peptide is unique for each species, and sperm can only recognize the peptide secreted by the egg of their own species. To any other sperm, the peptide is just another molecule present in the water. http://www.ncbi.nlm.nih.gov/books/NBK10010/
Perhaps protein folding is an example of decryption? From a single primary sequence (i.e. the string of amino acids), it is impossible to guess the function of a protein. The folded protein is the functional unit (of course it might be further modified).
I would think that the bacteria-phage restriction enzyme and methylation enzyme race could be thought of as a form of encryption. If your DNA isn't "signed" with the appropriate methylation patterns then it gets chopped into pieces and destroyed. This is the signing/verifying side of encryption rather than the encrypt/decrypt side, however.
I actually think I may have come up with a biological analogy to encryption.
If you define encryption as taking information, breaking it into random-seeming pieces or small packets of data that are sent on their way and, when they reach their destination, are put back together based on a key, so that unintended recipients cannot interpret the information, then I would say that the way our senses capture the information we experience, transmit it, and recompile it to form our experience of reality is something akin to encryption/decryption.
Take vision. Our eye receives photons of light and converts them into electrochemical signals that get relayed to the visual cortex. In the visual cortex, that data is reassembled into our experience of the world around us.
If, experimentally, we were able to tap into that signal running along the optic nerve and tried to interpret it just from the data flow, then it is unlikely that we would be able to reproduce the image being collected the way we could read the electrical impulse being transmitted from a CMOS or CCD sensor chip. I say this because our visual system is predictive. There is actually more information generated by the visual cortex than is being received and transmitted from the eyes.
We develop these keys in early child development and lay down these neural networks so that we are able to build and interpret an image without all of the data. This is how we can navigate through the world, drive a car, fly a jet fighter and cope with the signal delay that is inherent in the system. This may be more an example of compression of signal, but a compressed signal without the correct decompression algorithm would be a rudimentary form of encryption.
We can also see, with synesthesia, that when there is crosstalk between regions of the brain that is different from the norm, messages will be perceived in possibly unintended ways. Letters and numbers have colors, sounds produce visual imagery, etc.
I'm not sure we can call it encryption, but
a pattern that seems random on the surface but once you put it through a decryption process, it is actually something else
reminds me about some forms of predatory camouflage. I'm talking from the perspective of the prey: the predator blends with the environment and what the prey perceives is just 'random' environment. When the prey 'decrypts' that there is actually something else, it is generally too late.
Predator-prey situations have a clear motive for encryption, since they involve signaling. Consider the change in appearance of a coiling snake: it may be not only preparing to strike, but sending fake codes to prey at the same time, or coiling differently for mating.
On the receiving side, the timing of the length of gaze and the head movements of deer in the midst of observing a scene might be considered cryptanalysis, as they absorb and process data and carefully select responses and countermeasures.
Interspecies communication at the human level obviously includes cryptography, and it could be surmised that most animals and plants do some version of communication and hence some cryptography (as all have predators). The aforementioned viral example seems very apt; also interesting might be communication within a larger group, such as a biofilm exhibiting herd-like communication in the presence of complex predators or antibiotics. If whales communicate in the ocean, perhaps bacteria and viruses communicate and encrypt non-chemically in biofilms and hosts.
What is Encryption?
Encryption refers to algorithmic schemes that encode plain text into non-readable form, or ciphertext, providing privacy. The receiver of the encrypted text uses a "key" to decrypt the message, returning it to its original plain text form. The key is the trigger mechanism to the algorithm.
Passwords are usually encrypted by the browser to prevent anyone other than the recipient from being able to access them.
Until the advent of the Internet, encryption was rarely used by the public, but was largely a military tool. Today, with online marketing, banking, healthcare and other services, even the average householder is much more aware of it.
Web browsers will encrypt text automatically when connected to a secure server, evidenced by an address beginning with https. The server decrypts the text upon its arrival, but as the information travels between computers, interception of the transmission will not be fruitful to anyone "listening in." They would only see unreadable gibberish.
There are many types of encryption and not all of them are reliable. The same computer power that yields strong encryption can be used to break weak schemes. Initially, 64-bit encryption was thought to be quite strong, but today 128-bit is the standard, and this will undoubtedly change again in the future.
Though browsers automatically encrypt information when connected to a secure website, many people choose to use encryption in their email correspondence as well. This can easily be accomplished with programs that feature plug-ins or interfaces for popular email clients. The most longstanding of these is called PGP (Pretty Good Privacy), a humble name for a very strong, military-grade encryption program. PGP allows one to encrypt not only email messages, but personal files and folders as well.
Encryption can also be applied to an entire volume or drive. To use the drive, it is "mounted" using a special decryption key. In this state the drive can be used and read normally. When finished, the drive is dismounted and returns to an encrypted state, unreadable by interlopers, Trojan horses, spyware or snoops. Some people choose to keep financial programs or other sensitive data on encrypted drives.
Encryption schemes are categorized as being symmetric or asymmetric. Symmetric key algorithms such as Blowfish, AES and DES work with a single, prearranged key that is shared between sender and receiver. This key both encrypts and decrypts text. In asymmetric encryption schemes, such as RSA and Diffie-Hellman, the scheme creates a "key pair" for the user: a public key and a private key. The public key can be published online for senders to use to encrypt text that will be sent to the owner of the public key. Once encrypted, the ciphertext cannot be decrypted except by the one who holds the private key of that key pair. This algorithm is based around the two keys working in conjunction with each other. Asymmetric encryption is considered one step more secure than symmetric encryption, because the decryption key can be kept private.
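The symmetric case can be sketched in a few lines of Python. This is a toy XOR cipher, not one of the real algorithms named above (Blowfish, AES, DES); it only illustrates the defining property of a symmetric scheme: one shared key both encrypts and decrypts.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    # Because XOR is its own inverse, applying the same key twice
    # returns the original data -- the hallmark of a symmetric scheme.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attack at dawn"
key = os.urandom(len(plaintext))         # the single, prearranged key

ciphertext = xor_cipher(plaintext, key)  # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # the same key decrypts

assert recovered == plaintext
```

An asymmetric scheme such as RSA replaces the shared key with a public/private key pair, but the round-trip property (encrypt, then decrypt, recovers the plaintext) is the same.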
Strong encryption makes data private, but not necessarily secure. To be secure, the recipient of the data — often a server — must be positively identified as being the approved party. This is usually accomplished online using digital signatures or certificates.
As more people realize the open nature of the Internet, email and instant messaging, encryption will undoubtedly become more popular. Without it, information passed on the Internet is not only available for virtually anyone to snag and read, but is often stored for years on servers that can change hands or become compromised in any number of ways. For all of these reasons, it is a goal worth pursuing.
Consumer examples are plentiful, as every animal must consume food in order to live. Consumers are grouped into four categories – primary, secondary, tertiary, and quaternary. The category in which an animal is situated is defined by its food source within a specific food chain or food web, and not necessarily by its species or habits. For example, grizzly bears only have access to salmon at certain times of the year, while in early spring their diets are largely root-based and herbivorous. Depending on the available food source(s), a single species might be placed in different categories. The diagram below shows how easy it is to upset the flow of the trophic cascade of a food chain.
Examples of primary consumers are zooplankton, butterflies, rabbits, giraffes, pandas and elephants.
Primary consumers are herbivores. Their food source is the first trophic level of organisms within the food web, or plants. Plants are also referred to as autotrophs. Autotrophs produce their own energy from sunlight and basic nutrients via photosynthesis; in any ecosystem, the terms producer and autotroph have the same meaning. The herbivorous diet includes not only the leaves, branches, flowers, fruits and roots of plants, but also other autotrophic sources such as nectar and phytoplankton.
Primary consumers feed exclusively on autotrophs. Any organism that must eat in order to produce energy is both a heterotroph and a consumer. Rather confusingly, primary consumers are located in the second trophic level of the ecosystem. A trophic level is the position any organism occupies within any food chain. As vegetation is the most basic food source, plants are to be found at the first trophic level. Herbivores are positioned on the next rung of the trophic ladder, and are therefore primary consumers at the second trophic level.
Examples of secondary consumers are earwigs, ants, badgers, snakes, rats, crabs, hedgehogs, blue whales (their diet is primarily composed of phytoplankton-eating krill and zooplankton, and phytoplankton), lions, and humans.
Secondary consumers nearly always consume both producers and primary consumers and are therefore usually classed as omnivores. Secondary consumers make up the third trophic level of the food chain and are – as are all consumers – heterotrophs.
Examples of tertiary consumers are hawks, snakes, crocodiles and some big cats.
Tertiary consumers can be either omnivorous or carnivorous. They feed on primary and secondary consumers, and may also eat producers (plants). For a food chain to have a tertiary consumer, there must be a secondary consumer available for it to eat.
It is interesting to note that the same organism, in different situations or at different times, may occur at different trophic levels. For example, human vegans are primary consumers at the second trophic level, but a large proportion of the human race are omnivores. Another example can be found in beef consumption before and after bovine spongiform encephalopathy (BSE) legislation, which eventually stopped beef cows from being fed meat and bone meal. Before the legislation was passed, human consumption of beef would class us as tertiary consumers, as cows eating an omnivorous diet are themselves classed as secondary consumers. After the link between BSE and meat-based feeds was established, farms were only permitted to feed their herds plant-source diets. This means that humans currently eat beef as secondary consumers, as farms are only authorized to produce beef from primary consumers.
Examples of quaternary consumers are the great white shark, polar bear and alligator.
Quaternary consumers are not necessarily apex predators. An apex predator is at the top of the food chain in which it exists, and is not the living prey of any other organism. A quaternary consumer is simply a consumer which preys upon a tertiary consumer. To be classed as a quaternary consumer within a food chain or food web, there must be a tertiary consumer available for the quaternary consumer to prey upon. Quaternary consumers are found in the fifth trophic level and are not to be found in every food chain. The higher up the consumer ladder one goes, the more energy is required to support it. This is shown in the graphic below, where the size of each layer of the trophic pyramid indicates the ratio of each species to the others within a healthy food chain.
Wind turbines modeled after Humpback whales
Many of our modern aerodynamic designs rely on rather basic principles. To obtain optimal lift and minimal drag, sleek edges and clean lines are key. Yet throughout the animal kingdom, many species are capable of exceptional lift. The humpback whale, for example, uses bumpy, tubercled flippers for propulsion, which seems rather counterintuitive.
A Harvard-led research team determined that these nodules enable the whales to choose a steeper "angle of attack." The angle of attack is the angle between the flow of water and the face of the flipper. With humpback whales, this attack angle can be up to 40 percent steeper than with a smooth flipper. Due to these small ridges, sectional stalls occur at different points along the fin, which makes a full-on stall much easier to avoid.
Tests conducted by the U.S. Naval Academy, using model flippers, determined that these biomimetic fins reduced drag by nearly a third and improved lift by eight percent overall. Whale Power, a company based in Toronto, Canada, has already capitalized on this tubercle tech. According to MIT, Whale Power's biomimetic blades help generate the "same amount of power at 10 miles per hour that conventional turbines generate at 17 miles per hour."
15 Uncanny Examples of the Golden Ratio in Nature
The famous Fibonacci sequence has captivated mathematicians, artists, designers, and scientists for centuries. Closely tied to the Golden Ratio, its ubiquity and astounding functionality in nature suggest its importance as a fundamental characteristic of the Universe.
We've talked about the Fibonacci series and the Golden Ratio before, but it's worth a quick review. The Fibonacci sequence starts like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 and so on forever. Each number is the sum of the two numbers that precede it. It's a simple pattern, but it appears to be a kind of built-in numbering system for the cosmos. Here are 15 astounding examples of phi in nature.
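The rule "each number is the sum of the two that precede it" is easy to check directly, and a short sketch also shows the ratio of consecutive terms closing in on phi (the function below is illustrative, not from any particular library):

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers, starting 0, 1."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # each term is the sum of the previous two
    return seq[:n]

fibs = fibonacci(12)
print(fibs)              # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

# Ratios of consecutive terms approach the golden ratio.
phi = (1 + 5 ** 0.5) / 2         # 1.6180339...
print(fibs[11] / fibs[10])       # 89/55 = 1.61818..., already close to phi
```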
The Fibonacci Series: When Math Turns Golden
The Fibonacci Series, a set of numbers that increases rapidly, began as a medieval math joke…
Leonardo Fibonacci came up with the sequence when calculating the ideal expansion of a pair of rabbits over the course of one year. Today, its emergent patterns and ratios (phi = 1.61803...) can be seen from the microscale to the macroscale, and right through to biological systems and inanimate objects. While the Golden Ratio doesn't account for every structure or pattern in the universe, it's certainly a major player. Here are some examples.
1. Flower petals
The number of petals on a flower consistently follows the Fibonacci sequence. Famous examples include the lily, which has three petals; buttercups, which have five (pictured at left); the chicory's 21; the daisy's 34; and so on. Phi appears in petals on account of the ideal packing arrangement as selected by Darwinian processes: each petal is placed at 0.618034 per turn (out of a 360° circle), allowing for the best possible exposure to sunlight and other factors.
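The "0.618034 per turn" figure corresponds to the so-called golden angle of roughly 137.5°, the divergence angle commonly cited for leaf and petal packing. A quick sketch of the arithmetic:

```python
phi = (1 + 5 ** 0.5) / 2       # the golden ratio, ~1.618034
turn_fraction = phi - 1        # equals 1/phi, ~0.618034 of a full turn

# Each new petal is rotated by this fraction of 360 degrees; the
# smaller of the two angles this leaves is the golden angle.
rotation = 360 * turn_fraction     # ~222.5 degrees
golden_angle = 360 - rotation      # ~137.5 degrees
print(round(golden_angle, 1))      # 137.5
```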
2. Seed heads
The head of a flower is also subject to Fibonaccian processes. Typically, seeds are produced at the center, and then migrate towards the outside to fill all the space. Sunflowers provide a great example of these spiraling patterns.
In some cases, the seed heads are so tightly packed that total number can get quite high — as many as 144 or more. And when counting these spirals, the total tends to match a Fibonacci number. Interestingly, a highly irrational number is required to optimize filling (namely one that will not be well represented by a fraction). Phi fits the bill rather nicely.
3. Pinecones
Similarly, the seeds on a pinecone are arranged in a spiral pattern. Each cone consists of a pair of spirals, each one spiraling upwards in opposing directions. The number of steps will almost always match a pair of consecutive Fibonacci numbers. For example, a 3-5 cone is a cone which meets at the back after three steps along the left spiral and five steps along the right.
4. Fruits and Vegetables
Likewise, similar spiraling patterns can be found on pineapples and cauliflower.
5. Tree branches
The Fibonacci sequence can also be seen in the way tree branches form or split. A main trunk will grow until it produces a branch, which creates two growth points. Then, one of the new stems branches into two, while the other one lies dormant. This pattern of branching is repeated for each of the new stems. A good example is the sneezewort. Root systems and even algae exhibit this pattern.
6. Shells
The unique properties of the Golden Rectangle provide another example. This shape, a rectangle in which the ratio of the sides a/b is equal to the golden mean (phi), can result in a nesting process that can be repeated into infinity, and which takes on the form of a spiral. It's called the logarithmic spiral, and it abounds in nature.
Snail shells and nautilus shells follow the logarithmic spiral, as does the cochlea of the inner ear. It can also be seen in the horns of certain goats, and the shape of certain spider's webs.
7. Spiral Galaxies
Not surprisingly, spiral galaxies also follow the familiar Fibonacci pattern. The Milky Way has several spiral arms, each of them a logarithmic spiral with a pitch of about 12 degrees. As an interesting aside, spiral galaxies appear to defy Newtonian physics. As early as 1925, astronomers realized that, since the angular speed of rotation of the galactic disk varies with distance from the center, the radial arms should become curved as galaxies rotate. Subsequently, after a few rotations, spiral arms should start to wind around a galaxy. But they don't, hence the so-called winding problem. The stars on the outside, it would seem, move at a velocity higher than expected, a unique trait of the cosmos that helps preserve its shape.
8. Faces
Faces, both human and nonhuman, abound with examples of the Golden Ratio. The mouth and nose are each positioned at golden sections of the distance between the eyes and the bottom of the chin. Similar proportions can be seen from the side, and even in the eye and ear itself (which follows along a spiral).
It's worth noting that every person's body is different, but averages across populations tend towards phi. It has also been said that the more closely our proportions adhere to phi, the more "attractive" those traits are perceived to be. As an example, the most "beautiful" smiles are those in which the central incisors are 1.618 times wider than the lateral incisors, which are 1.618 times wider than the canines, and so on. It's quite possible, from an evo-psych perspective, that we are primed to like physical forms that adhere to the golden ratio, a potential indicator of reproductive fitness and health.
9. Fingers
Looking at the length of our fingers, each section, from the fingertip down to the wrist, is larger than the preceding one by roughly the ratio of phi.
Explanation and Teleology in Aristotle's Science of Nature
Mariska Leunissen, Explanation and Teleology in Aristotle's Science of Nature, Cambridge University Press, 2010, 250pp., $85.00 (hbk), ISBN 9780521197748.
Reviewed by Devin Henry, University of Western Ontario
No idea is more synonymous with Aristotle and none more fundamental to Aristotelian philosophy than teleology. So it is quite remarkable that there have been only two full-length monographs in English exclusively devoted to the subject -- Monte Johnson's Aristotle on Teleology (OUP 2005) and now Mariska Leunissen's Explanation and Teleology in Aristotle's Science of Nature. (There is one other monograph in Italian: D. Quarantotto, 2005, Causa finale, sostanza, essenza in Aristotele, Saggi sulla struttura dei processi teleologici naturali e sulla funzione dei telos, Napoli: Bibliopolis.) The strength of Leunissen's book, which sets it apart from other discussions of Aristotle's teleology, is that her interpretation is developed from a careful analysis of Aristotle's actual use of teleological explanations in the biological works, which is where most of the interesting material is to be found. She examines an impressive assortment of textual examples and offers a detailed exposition of their content. The result is a rich account of how Aristotle thinks teleological causation operates in nature and how final causes are to be integrated into a more comprehensive picture of explanation in natural science. Explanation and Teleology in Aristotle's Science of Nature is an important contribution to scholarship on Aristotle's teleology. And while Leunissen's will certainly not be the last word on the subject, her book has added significantly to the debate and must be engaged with by anyone wishing to tackle the subject from this point forward.
The main argument of the book is organized around three central ideas. First, Leunissen argues that in order to grasp Aristotle's teleology we need to make a distinction between two types of teleological causation, what she calls "primary" and "secondary" teleology. Second, explanations in natural science often make use of teleological principles (such as "nature does nothing in vain") which, according to Leunissen, function as heuristic devices: they are deployed by the natural scientist to help uncover those causally relevant features that are to be picked out in ultimate explanations. Third, the scientific value of final causes for Aristotle lies in their having explanatory rather than causal priority. Among other things, this has significance for how we understand Aristotle's puzzling remarks about demonstrations through final causes in Posterior Analytics II 11. My review will be devoted to a critical assessment of these three claims. And while I take issue with several aspects of Leunissen's interpretation, overall I found her arguments both illuminating and persuasive.
Primary versus Secondary Teleology
While Leunissen has something to say about elemental motion and heavenly bodies (see esp. Chapter 5), her central focus is on living things and their parts. This seems justified. For Aristotle twice says living things are substances "most of all" (Metaphysics 1041b28-31, 1043b19-23), and so we should expect organisms to exhibit teleology in the strictest sense. Leunissen's distinctive contribution to our understanding of Aristotle's natural teleology is her claim that, when it comes to organisms at least, Aristotle distinguishes two patterns of teleological causation:
In the first case, it is the presence of a preexisting potential for form that guides the actions of the formal nature and that thereby directs the teleological process of its realization. In the second case, it is the presence of certain material potentials that allows for certain teleological uses (and not for others); the actions of the formal nature in making use of these materials are secondary to the operation of material necessity that produced the materials in the first place. Both processes thus involve the goal-directed action of the formal nature -- which is why both processes qualify as being teleological -- but in the first case, the actions are primarily 'driven by form' (e.g., the form of flyer requires the production of wings); in the second, they are primarily 'driven by matter' (e.g., the availability of hard materials allows for the production of protective parts like horns and hair). (p. 20; for a full discussion see pp. 18-22, 85-99)
According to Leunissen, primary teleology involves realizing a "preexisting potential for form". (If I may borrow a modern analogy, think of the execution of an inherited program that codes for certain traits.) Here the form being realized constitutes the final cause of the process leading up to it, while that process is said to be "for the sake of" that form precisely because it is the actualization of a potential for that end. Leunissen argues that this kind of teleological causation is associated with those parts that Aristotle describes as "conditionally necessary" for the execution of an organism's vital (survival) or essential (kind-defining) functions. Secondary teleology, by contrast, is at work in cases where Aristotle speaks of the formal nature "using" raw materials that have come to be owing to material (rather than conditional) necessity in order to bring about some good end. The parts that result from this type of causation are present, not because they are indispensable for the execution of some vital or essential function, but because they contribute in some way to the organism's well-being. As Leunissen puts it, secondary teleology generates parts that are present not for the sake of living but for living well (p. 19; see also pp. 89-95).
For example, suppose we define fish as blooded swimmers that cool themselves by taking in water. Like all blooded animals, fish must have a liver and a heart in order to survive (PA IV 12, 677a36-b5). And since they are (by definition) swimmers that breathe in water, they must also have fins and gills. These parts are all included in the basic design of a fish, whose construction is coded in the developmental program executed by its formal nature (Leunissen's "potential for form"). Now suppose that certain materials arise during development that are not coded by the program but come to be, say, as a necessary byproduct of the process of making fins. Not being wasteful, nature will make use of this matter to add to the basic fish design. It may add a lateral line to help the fish better detect prey or it may give it a dorsal fin equipped with hard spines for added protection. None of these additional parts are absolutely required for being a fish, in the sense that nature could have designed a fish without those parts, but instead they are added to make the fish's life better in some way. In Leunissen's scheme, they are the result of "secondary" teleology.
On this interpretation, primary and secondary teleology are divided along two axes: in terms of the nature of the causal process involved (the former involves the actualization of a "preexisting potential for form" operating through "conditional necessity", while the latter involves the formal nature "using" extra materials whose presence is due to "material necessity") and in terms of the status of the parts that result from those processes (the former underwrites the formation of parts that are absolutely required for existence, while the latter underwrites the formation of subsidiary parts that contribute to the organism's well-being). Given the presence of these two patterns of teleological causation, Leunissen argues that in order to understand any particular application of natural teleology we must determine whether the formation of the end that constitutes the final cause is primarily driven by form (primary teleology) or by matter (secondary teleology). Chapter 4, §4.3 identifies two patterns of teleological explanations where the form is the causally primary factor and three patterns where the matter is the causally primary factor. The distinction between primary and secondary teleology forms the heart of Leunissen's interpretation, so I shall spend some time developing my evaluation of this idea before turning to her other two theses.
There are two ways that someone might respond to Leunissen here. First, one might agree that Aristotle makes a distinction between necessary parts that are present because the animal could not exist without them and subsidiary parts that are present because they contribute to its well-being (e.g., GA I 4, 717a15-17) but deny that this tracks a real distinction between kinds of teleological causation. For example, in GA II 6 Aristotle tells us how the formal nature can "use" materially necessitated changes to achieve its developmental goals (743a36-b8), which sounds like Leunissen's account of secondary teleology. Yet the parts that Aristotle attributes to this kind of teleological causation here include flesh, bones and sinew -- parts that are all necessary for the existence of an animal. Again, in GA II 4 we are told that the formal nature makes use of things that arise "from [material] necessity" for the sake of producing a set of extraembryonic membranes around the embryo (739b26-32). These are again not subsidiary parts that somehow enhance the animal's way of life but are absolutely critical for its very survival, for no embryo could survive to adulthood unless it was surrounded by such membranes. Both of these examples suggest that the distinction Aristotle makes between necessary and subsidiary parts cannot be neatly mapped onto the distinction Leunissen sees between primary and secondary teleological causation.
The other way one might respond to Leunissen's interpretation is to deny that Aristotle's natural teleology can be divided into two discrete forms. In distinguishing primary teleology from secondary teleology Leunissen has certainly put her finger on some important differences in the way Aristotle understands teleological causation. But these differences may turn out to be more nuanced and continuous than Leunissen's strict dichotomy allows. Consider the following three examples, all of which involve final causation to different degrees:
Case 1 . Fins (cf. PA IV 13, 695b17-26). Both the raw materials and the part itself come to be for the sake of the function eventually performed by that part (e.g., swimming). Here all (or at least most) aspects of the part's development can be traced to the goal-directed actions of the animal's formal nature.
Case 2 . Horns (PA III 2, 663b21-22). In this case the formal nature takes raw materials that are already present owing to material (rather than conditional) necessity and fashions them into an organ capable of performing some useful function (e.g., defense).
Case 3. Omentum (PA IV 3, 677b21-8). Both the raw materials and the part itself come to be through material necessity alone. Here all aspects of the part's development can be traced to non-teleological changes arising from the organism's material nature. But once the part has come into being, it is then put to work in the mature organism for some useful function.
Leunissen identifies case (1) with primary teleology and cases (2) and (3) with secondary teleology (for the latter see Leunissen, pp. 92-5). But this dichotomy effaces certain similarities and differences between the three cases that seem equally important for understanding Aristotle's use of teleology.
First, as Leunissen notes, (1) differs from (2) and (3) in terms of the origins of the raw materials. In (1) the matter that is used to make the part is there because it is required for that part to perform its function. As Aristotle puts it, the matter is "conditionally necessary" for that end. By contrast, the raw materials used in (2) and (3) do not come to be for the sake of anything but owe their existence to material necessity alone. With horns, for example, Aristotle says that nature "borrows (katakechrêtai)" materials that are "present of necessity" (tois huparchousin ek anankês) for the sake of making something good (PA 663b21-2). However, there is also an important sense in which case (2) resembles case (1), which distinguishes them both from case (3). As Leunissen herself notes, in both (1) and (2) the development of the part itself is controlled by the goal-directed actions of the formal nature operating for the sake of an end (p. 20). Horns are made for defense just as much as fins are made for swimming. The fact that horns are made from raw materials that happen to result from non-teleological forces seems to be of little significance when compared with the teleological processes involved in transforming those materials into a functioning organ. At least Leunissen gives us no reason to think that in such cases the teleological actions of the formal nature in constructing the part should be considered "secondary" to the non-teleological changes that produced the raw materials on which it operates. The distinction between (1) and (2) thus seems to be more a difference in emphasis than a difference in kind.
With (1) and (2), then, the parts in question are both generated by the formal nature aiming at a specific goal, which allows us to say that those parts come to be for the sake of their functions (pace Leunissen, p. 95). With (3) the part in question is simply used by the mature organism for some useful function, but it did not come to be for that reason since its generation was driven entirely by material-level forces operating independently of teleological causation. This seems to warrant grouping (1) and (2) in opposition to (3) from the perspective of the developmental process itself.
To accommodate this, one might accept Leunissen's basic distinction between kinds of teleology but insist on a third kind of "tertiary" teleology. Like cases of secondary teleology, tertiary teleology would involve parts whose raw materials are present owing to material necessity alone. However, they differ from cases of secondary teleology in that the part itself also results from material necessity, whereas in secondary teleology (like primary teleology) the actual formation of the part from those materials is still governed by the goal-directed activities of the formal nature. Alternatively, one might agree with Leunissen that there are important differences in the way Aristotle understands teleological causation but deny that these can be captured by discrete and mutually exclusive categories. Instead (the objection goes) Aristotle sees those differences as a matter of degree so that any attempt to draw sharp divisions between "kinds of teleology" involves imposing artificial boundaries on something that is ultimately continuous.
Teleological Principles as Heuristic Devices
In addition to standard teleological explanations of the form "X is/comes to be for the sake of Y", Leunissen also considers Aristotle's use of various teleological principles, which she describes as "generalizations over the goal-directed actions of the formal nature (or soul) of an animal while engaged in animal generation" (p. 119). These include such principles as nature does nothing in vain (IA 2, 704b12-17), nature does everything either because it is (conditionally) necessary or because it is better (GA I 4, 717a15-16) and nature only provides weapons to those that can use them (PA III 1, 661b27-32). What is the epistemological status of such principles? How do they fit into Aristotle's broader philosophy of science? According to one view, such teleological principles function as explicit premises in biological demonstrations.  Against this Leunissen argues that their role is best characterized as heuristic. Such principles help point the natural scientist towards those causally relevant factors that are to be picked out in the ultimate explanations of phenomena -- explanations, moreover, whose premises will make no reference to those principles as causes (p. 112). Leunissen offers three reasons for why such principles cannot function as premises in demonstrations (pp. 122-3), though I shall leave it to the reader to assess the merits of her arguments.
I suspect that the right interpretation lies somewhere between these two views. There are definitely cases where Aristotle uses teleological principles heuristically. For example, in GA II 5 Aristotle asks why males exist in addition to females. To help resolve this puzzle Aristotle invokes the principle that nature does nothing in vain: since nature makes nothing in vain, males must make some contribution to generation. But the principle doesn't explain anything, for it doesn't tell us what that contribution is. Instead, it simply prompts us to consider what it is that females are unable to supply by examining embryos that are generated parthenogenetically. (For another example see GA I 4, 717a11-21 and Leunissen's discussion on pp. 125-7.) But not all uses of teleological principles function in this way. Sometimes the fact captured by the principle is one of those causally relevant features that cannot be eliminated from the final account without crucial loss of explanatory content (e.g., IA VIII, 708a10-20; GA II 6, 744a34-744b1). If this is right, then Aristotle's teleological principles should not be seen as performing any single function in his natural science. Sometimes they are used as heuristic devices that help us find the causally relevant features to be cited in the ultimate explanation; at other times they capture basic facts about the world that are themselves among those causally relevant features, whether those facts alone provide the ultimate explanation (so that no further facts are needed to explain the phenomenon in question) or whether they simply form an ineliminable part of that ultimate explanation along with other causally relevant facts.
The Importance of Final Causes
Let me turn briefly to Leunissen's third thesis. One of the main questions raised by Aristotle's teleology is why he thinks natural science must have recourse to final causes at all. Why are final causes indispensable to the science of nature? Leunissen's position lies somewhere between the interpretation that says Aristotle's final causes play a mere heuristic role and the interpretation that sees his commitment to final causes as stemming from a belief that natural phenomena cannot come to be by material necessity alone. In contrast to the latter interpretation, Leunissen argues that Aristotle's attraction to teleology derives primarily from his belief that inquiring into final causes is the most effective method for acquiring scientific knowledge (p. 209). The functions and goals that constitute final causes are usually obvious to perception and as such provide the best starting points for discovering other causally relevant properties and changes related to the explanandum (p. 211). For Aristotle, those properties and changes are all equally opaque from a mechanistic point of view; they only become salient when organisms and their parts are studied as teleologically organized wholes (see Resp. 3, 471b24-9). In this way Leunissen argues that the importance of final causes lies in their explanatory priority:
Through the investigation of natural phenomena from a teleological viewpoint, one is able to distinguish the causally relevant features of those phenomena, and thereby to discover the features that are to be included in the complete explanation of them. The identification of final causes thus helps to frame the search for material, formal and efficient causes of some phenomenon and thereby to find its complete causal explanation. (p. 211)
At the same time Leunissen is careful to distance her interpretation from the so-called Kantian reading that sees Aristotle's final causes as merely heuristic. On that reading, Aristotle thinks it is useful to look at nature as if it was governed by final causes, since adopting a teleological perspective helps to identify the real (i.e., material-efficient) causes of things. Since Aristotle thinks final causes have no ontological significance, he thinks natural science can dispense with them once the true causes have been found. Leunissen denies that this is Aristotle's view (p. 112). On her reading, Aristotle sees natural science as a search for the ultimate causes of natural phenomena, and these include final causes. Those final causes have real ontological force and constitute an ineliminable feature of Aristotle's world. Living things really are teleologically organized wholes whose generation is controlled by the goal-directed actions of their formal natures.
The back cover jacket describes the intended audience for this book as "those who are interested in Aristotle's natural science, his philosophy of science, and his biology". But given the significance of teleology, not only for Aristotle's own philosophy but for the history of philosophy in general, this book will be of interest to a much broader audience. While the reader is assumed to have some familiarity with Aristotle's philosophy of nature, Leunissen's discussion is quite accessible. Most technical concepts are explained and illustrated with examples, and she offers an abundance of textual evidence in support of her claims. The merits of Leunissen's book are by no means exhausted by the ideas I have discussed in this review. And my criticisms should in no way be taken as a negative assessment of its overall achievements. Leunissen has many important things to say about the positive role that material necessity plays in Aristotle's account of teleology, about Aristotle's famous defense of teleology in Physics II 8, how the doctrine of final causes is integrated into the theory of demonstration in Posterior Analytics II 11 and how this compares with Aristotle's actual practice of providing teleological explanations in the biological works, and what the limits of teleology are vis-à-vis Aristotle's understanding of cosmology. Readers may not agree with Leunissen's views at every turn, but there is certainly no shortage of philosophically engaging ideas in her book.
 James Lennox, "Nature Does Nothing in Vain", in J. Lennox (ed.), Aristotle's Philosophy of Biology: Studies in the Origins of Life Science, Cambridge University Press, 2001, pp. 205-224.
 Wolfgang Wieland, "The Problem of Teleology", in J. Barnes, M. Schofield, R. Sorabji (eds.), Articles on Aristotle, Duckworth Academic Press, 1975, pp. 141-160.
Carrying Capacity Examples
North American Deer Flourish
An example of a situation in which the carrying capacity of an environment was exceeded can be seen in the deer populations of North America.
After the widespread elimination of wolves – the natural predator of North American deer – the deer reproduced until their need for food exceeded the environment’s ability to regenerate their food. In many areas, this resulted in large numbers of deer starving until the deer population was severely reduced.
Deer, fairly large North American herbivores, were capable of eating leaves off trees and shrubs, as well as low-growing plants like flowers and grass. And they required a lot of leaves to keep them going, as members of different deer species could weigh anywhere from 50 to 1,500 pounds!
But when European settlers severely depleted the population of wolves, whom they found to be a danger to human children and livestock, an unexpected consequence resulted: deer began to multiply out of control, until they exceeded the carrying capacity of their environment.
North American Deer Decline
As a result, deer began to starve. Plant species also began to suffer, some even becoming threatened with extinction as the starving deer ate all the green plants they could find.
When humans realized what was happening – and it began to affect their own food sources, after wild deer began to invade gardens and farms looking for crops to eat – they began to give nature a helping hand in reducing the deer population.
In modern times, some areas “cull” deer – a practice where deer are systematically hunted, not just for meat or sport, but to prevent deer starvation and damage to plants. Other areas have even begun to re-introduce wolves, and these areas have seen healthier ecosystems, gardens, and crops as a result.
The story of the North American wolves and deer has acted as a cautionary tale for people considering making changes of any kind to their natural environment, which might have unintended consequences.
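The boom-and-crash dynamic described above is commonly modeled with the logistic growth equation, in which a population grows quickly while resources are plentiful and levels off near the carrying capacity. Here is a minimal sketch with illustrative parameter values, not measurements of any real deer herd:

```python
# Minimal sketch of the discrete logistic growth model behind the idea of
# "carrying capacity". Parameter values are illustrative, not real deer data.

def logistic_step(n, r, k):
    """One generation of logistic growth toward carrying capacity k."""
    return n + r * n * (1 - n / k)

def simulate(n0, r, k, generations):
    pops = [n0]
    for _ in range(generations):
        pops.append(logistic_step(pops[-1], r, k))
    return pops

# A small herd growing toward a carrying capacity of 1000 deer:
herd = simulate(n0=50, r=0.4, k=1000, generations=40)
print(round(herd[-1]))   # settles near the carrying capacity
```

With a modest growth rate the population settles smoothly at the carrying capacity; with a much higher rate the same equation overshoots and crashes, which is the behavior the deer story illustrates.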
The Daisyworld Model
The hypothetical “Daisyworld” model was developed by scientists to study how organisms change their environment, and how ecosystems self-regulate.
In the original “Daisyworld” mathematical simulation, there were only two types of life forms: black daisies, which increase the environment’s temperature by absorbing heat from the Sun (this is a real property of black materials), and white daisies, which decrease the environment’s temperature by reflecting the Sun’s heat (this is also a real effect of white-colored materials).
Each species of daisies had to live in a proper balance with the other species. If the white daisies became overpopulated, the world would become too cold. Daisies of both types would begin to die off, and the world would start to regain equilibrium. The same held true for black daisies: if they became overpopulated, the world would become warmer and warmer until the daisies began to die off again.
Real-life ecosystems are much more complicated than this, of course.
Each organism has many needs, and how well the environment can meet those needs might depend on what other organisms it shares the environment with.
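The feedback loop described above can be sketched in a few lines of code. This is a heavily simplified toy version of the original Watson-Lovelock model, with made-up warming, cooling and growth parameters, but it shows the same self-regulating behavior: black daisies warm the planet, white daisies cool it, and each grows best near a comfortable temperature.

```python
# Toy sketch of the Daisyworld feedback loop. All numbers are illustrative;
# the real Watson-Lovelock model is considerably more detailed.

def growth(temp):
    """Daisy growth factor: peaks at 22.5 deg C, zero outside roughly 5-40 deg C."""
    return max(0.0, 1.0 - ((22.5 - temp) / 17.5) ** 2)

def simulate(sun_temp, steps=200):
    black, white = 0.01, 0.01                 # fractions of ground covered
    temp = sun_temp
    for _ in range(steps):
        # black daisies warm the planet, white daisies cool it
        temp = sun_temp + 20 * black - 20 * white
        bare = max(0.0, 1 - black - white)
        # black daisies run a little warmer than the planet, white a little cooler
        black += black * (bare * growth(temp + 5) - 0.3)
        white += white * (bare * growth(temp - 5) - 0.3)
        black, white = max(black, 0.001), max(white, 0.001)
    return black, white, temp

b, w, t = simulate(sun_temp=20)   # both species persist; temp stays livable
```

Running the simulation with a cool sun favors black daisies and a hot sun favors white ones, but in both cases the planet's temperature is pulled back toward the range where daisies can grow.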
Humans Change the Carrying Capacity
Humans have become one of the world’s only global species by mastering technology. Time and time again, the human species has overcome a factor, such as the availability of food or the presence of natural predators, that limited our population.
The first major human population explosion happened after the invention of agriculture, when humans learned that we could grow large numbers of our most nutritious food plants by saving seeds to plant in the ground. By making sure those seeds got enough water and were protected from competition from weeds and from being eaten by other animals, we ensured a steady food supply.
When agriculture was invented, the human population skyrocketed – scientists think that without agriculture, only between 1 million and 15 million humans were able to live on Earth. Today, there are nearly 3 million humans in the city of Chicago alone!
By the Middle Ages, when well-organized agriculture had emerged on every continent, there were about 450 million – or about half a billion – humans on earth.
Putting Technology to Work
A new revolution in Earth’s capacity to carry humans began in the 18th and 19th centuries when humans began to apply advanced and automated technology to agriculture. The use of inventions such as the mechanical corn picker and crop rotation – a way of growing different crops in a sequence that enriches the soil and leads to higher yields – allowed humans to produce even more food. As a result, the world population tripled from about half a billion to 1.5 billion people.
In the twentieth century, a third revolution occurred when humans began to learn how to rewrite the genomes of plants, using tools such as viruses to insert new genes into plants directly instead of relying on selective breeding and random mutation to increase crop yields. The result was another drastic increase in the Earth’s ability to produce food for humans.
During the 20th century, Earth’s human population more than quadrupled, from 1.5 billion to 6.1 billion. We’ve come a long way from the pre-agricultural days!
But some scientists worry that we may be well on our way to exceeding the Earth’s carrying capacity – or that we may have already done so.
What is the Human Carrying Capacity?
Though we have massively expanded the carrying capacity for the human species, our activities are not without consequence. There are several possible limitations on the human species that not even technology can save us from.
Scientists point to the rapid decline of bee populations – which are necessary to pollinate some of our crops, and which many scientists believe are being killed by pesticides we use to protect those same crops – as evidence that our current food production practices may not be sustainable for much longer.
The proliferation of poisonous algae, which can poison our water supplies and which feed on the same fertilizers we use to feed our crops, is another worrisome sign that we may be exceeding our carrying capacity, and may begin to cause problems for ourselves if our population continues to grow.
Some scientists fear that humans may exceed the Earth’s carrying capacity for humans, and encourage the use of contraception to decrease birth rates in order to prevent human populations from exhausting their sources of food and other vital resources.
Where To Observe Fractals In Nature:
Walking through a forest, you will find fractal patterns in the network-like branching everywhere among the ferns, trees, roots, leaves, and the fungal mycelium in the soil.
You will also find them throughout the natural world in the patterns of streams, rivers, coastlines, mountains, waves, waterfalls and water droplets.
Here are some examples of fractal patterns in nature:
1. Trees
Trees are perfect examples of fractals in nature. You will find fractals at every level of the forest ecosystem from seeds and pinecones, to branches and leaves, and to the self-similar replication of trees, ferns, and plants throughout the ecosystem.
2. River Deltas
This aerial footage from NASA of the Ayeyarwady River Delta (also referred to as Irrawaddy) in Myanmar is a great example of the fractal branching patterns of river delta ecosystems.
3. Growth Spirals
You will also find fractal patterns in growth spirals, which follow a Fibonacci Sequence (also referred to as the Golden Spiral) and can be seen as a special case of self-similarity.
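The sequence behind these spirals is easy to generate: each Fibonacci term is the sum of the previous two, and the ratio of neighbouring terms converges to the golden ratio, which is why the spiral has its characteristic shape.

```python
# The Fibonacci sequence behind many natural growth spirals. The ratio of
# neighbouring terms approaches the golden ratio, (1 + sqrt(5)) / 2 = 1.618...

def fibonacci(count):
    terms = [1, 1]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])
    return terms

fib = fibonacci(15)
print(fib[:8])              # [1, 1, 2, 3, 5, 8, 13, 21]
print(fib[-1] / fib[-2])    # already very close to the golden ratio
```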
4. Flowers
Observe the self-replicating patterns of how flowers bloom to attract bees. Gardens are amazing places to explore the fractal nature of growth.
5. Romanesco Broccoli
You won’t find it in the forest, but this edible flower bud of the species Brassica oleracea (broccoli) from Italy is a wholesome and delicious example of fractal geometry.
These arrangements have explanations at different levels – mathematics, physics, chemistry, biology. Here’s what Wikipedia has to say about what the sciences have observed about these patterns in nature:
“From the point of view of physics, spirals are lowest-energy configurations which emerge spontaneously through self-organizing processes in dynamic systems. From the point of view of chemistry, a spiral can be generated by a reaction-diffusion process, involving both activation and inhibition. Phyllotaxis is controlled by proteins that manipulate the concentration of the plant hormone auxin, which activates meristem growth, alongside other mechanisms to control the relative angle of buds around the stem. From a biological perspective, arranging leaves as far apart as possible in any given space is favored by natural selection as it maximizes access to resources, especially sunlight for photosynthesis.”
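The "as far apart as possible" arrangement mentioned above can be sketched with the classic golden-angle model of phyllotaxis, in which bud n sits at angle n × 137.5° and at a radius proportional to √n. The 137.5° value and square-root spacing are the standard idealized model, not measurements of any particular plant:

```python
import math

# Sketch of the golden-angle phyllotaxis model: bud n is placed at angle
# n * 137.5 degrees and radius proportional to sqrt(n), spreading the buds
# as evenly as possible around the stem. Idealized model, not plant data.

GOLDEN_ANGLE = math.radians(137.5)

def phyllotaxis(count, scale=1.0):
    points = []
    for n in range(1, count + 1):
        angle = n * GOLDEN_ANGLE
        radius = scale * math.sqrt(n)
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

seeds = phyllotaxis(200)   # plot these and a sunflower-like spiral appears
```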
Fractals are hyper-efficient in their construction, and this allows plants to maximize their exposure to sunlight and to transport nutrients efficiently throughout their cellular structure. These fractal patterns of growth have a mathematical, as well as physical, beauty.
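The self-similar branching seen in trees and ferns can be sketched with a short recursion: each branch spawns two smaller, rotated copies of itself. The 0.7 scale factor and 25° branching angle here are illustrative choices, not botanical constants:

```python
import math

# Sketch of self-similar branching: every branch produces two smaller
# copies of itself, rotated left and right. Scale and angle are illustrative.

def branches(x, y, angle, length, depth):
    """Return the line segments of a binary fractal tree."""
    if depth == 0:
        return []
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segs = [((x, y), (x2, y2))]
    for turn in (math.radians(25), -math.radians(25)):
        segs += branches(x2, y2, angle + turn, length * 0.7, depth - 1)
    return segs

tree = branches(0, 0, math.pi / 2, 10, depth=6)
print(len(tree))   # 2**6 - 1 = 63 self-similar segments
```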
2 Materials and methods
The schema of Cryfa is shown in Supplementary Figure S1. To encrypt and compact a Fastq file, Cryfa first splits it into headers, bases and quality scores. Similarly, a Fasta file is split into headers and bases. In the next step, these split segments are packed in different fixed-size blocks, in such a way that each block maps a tuple of symbols onto an ASCII character. The number of symbols considered for each tuple can differ for headers, bases and quality scores. The next step employs a key file, containing a password, to shuffle the packed content obtained by joining the outputs of the different packing blocks. Supplementary Note S4 provides a guideline for making the key file, which can be done with the ‘keygen’ tool that we provide alongside the Cryfa tool. As a result of shuffling, the content becomes uniformly permuted and is transformed into pseudo-high data complexity; hence, it becomes resistant to low-data-complexity and known-plaintext (KPA) attacks. In the final step, authenticated encryption, which simultaneously provides data confidentiality and integrity, is carried out on the shuffled content by the AES method in Galois/counter mode (GCM). The output of this final step is an encrypted and compact Fasta/Fastq file.
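The packing idea, mapping a tuple of symbols onto a single ASCII character, can be illustrated with a toy example for DNA bases. This is a sketch of the general technique only, not Cryfa's actual block format: with a 4-letter alphabet, every 3-base tuple is one of 4^3 = 64 combinations and therefore fits in one printable character.

```python
# Toy illustration of tuple-to-ASCII packing (NOT Cryfa's actual format):
# each 3-base tuple over the alphabet ACGT is one of 64 combinations,
# so it can be stored as a single printable ASCII character.

BASES = "ACGT"

def pack(seq):
    seq += "A" * (-len(seq) % 3)                 # pad to a multiple of 3
    out = []
    for i in range(0, len(seq), 3):
        a, b, c = (BASES.index(ch) for ch in seq[i:i + 3])
        out.append(chr(33 + a * 16 + b * 4 + c)) # offset into printable range
    return "".join(out)

def unpack(packed):
    out = []
    for ch in packed:
        v = ord(ch) - 33
        out.append(BASES[v // 16] + BASES[(v // 4) % 4] + BASES[v % 4])
    return "".join(out)

packed = pack("ACGTACGTA")
assert unpack(packed) == "ACGTACGTA"   # lossless round trip, 3x smaller
```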
To decrypt and unpack a file, it is first decrypted by the AES method in GCM mode. Then, the decrypted content is unshuffled using the key file, restoring the original symbol order. Note that the key file used in this phase needs to be the same as the one used for shuffling. Finally, the unshuffled content is unpacked using a lookup table, yielding the decrypted and unpacked file. Owing to the lossless nature of the Cryfa tool, this file is identical to the original Fasta/Fastq file that was encrypted and compacted.
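The key-driven shuffle and its exact inverse can be sketched as follows. This is an illustrative sketch only (Cryfa's real permutation scheme differs): a permutation derived from the key is applied to the data, and the same key regenerates the permutation so it can be undone.

```python
import random

# Sketch of a keyed shuffle/unshuffle (Cryfa's real scheme differs).
# The same key reproduces the same permutation, so shuffling is invertible.

def shuffle(data, key):
    order = list(range(len(data)))
    random.Random(key).shuffle(order)       # key-derived permutation
    return [data[i] for i in order]

def unshuffle(shuffled, key):
    order = list(range(len(shuffled)))
    random.Random(key).shuffle(order)       # same key -> same permutation
    restored = [None] * len(shuffled)
    for pos, i in enumerate(order):
        restored[i] = shuffled[pos]
    return restored

data = list(b"GATTACA")
mixed = shuffle(data, key="secret-key-file")
assert unshuffle(mixed, key="secret-key-file") == data   # exact inverse
```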
Cryfa is capable of preserving the privacy of genomic data in Fasta, Fastq, VCF, SAM and BAM formats. If a genomic file, e.g. in VCF format, is passed to Cryfa, it is efficiently shuffled and encrypted. Supplementary Note S1 describes the methods in greater detail.
Secure communication can be provided using cryptographic techniques, even in the presence of malicious third parties known as adversaries. This set of techniques is referred to as Cryptography. Private messages can be hidden from the public or any third party using a set of protocols. These protocols need to be analyzed and constructed efficiently in order to maintain the secrecy of the messages being sent. Modern Cryptography has certain aspects that are central to it, such as data integrity, authentication and confidentiality. In the modern world, Cryptography relies heavily on mathematics and computer science. Cryptographic algorithms are designed so that they are hard to crack in practice by any adversary: a practical approach toward cracking such an algorithm will fail, even though a theoretical attack may possibly break the system. Thus, an algorithm can be called secure if its key properties cannot be deduced from a given ciphertext. Cryptography can be categorized into two branches: symmetric and asymmetric. In the symmetric approach, a single key is utilized for both encryption and decryption, i.e. the sender and receiver share a key. With this approach, however, the distribution of the key was a weak link, which gave rise to a novel approach. In asymmetric cryptography, each party has two keys, a public key and a private key. The private key is kept secret, whereas the public key is exposed to the outside world. Any data encrypted with a public key can only be decrypted using the corresponding private key. In terms of performance, the symmetric approach is faster than the asymmetric one; for example, a digital signature uses asymmetric cryptography to sign a hash of the message rather than the complete message.
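The shared-key principle can be illustrated with a deliberately simple toy cipher. The XOR scheme below is insecure and stands in for a real cipher such as AES; it only demonstrates that one shared key both encrypts and decrypts:

```python
from itertools import cycle

# Toy illustration of symmetric (shared-key) encryption. XOR with a
# repeating key is INSECURE; a real system would use a vetted cipher
# such as AES. The point: the same key encrypts and decrypts.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared secret"
ciphertext = xor_cipher(b"attack at dawn", key)
assert xor_cipher(ciphertext, key) == b"attack at dawn"   # same key decrypts
```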
Encryption is one of the components of Cryptography, and it is the most effective and popular data security technique. The encryption process involves transforming data into another form, known as ciphertext, whereas the original data to be encrypted is known as plaintext. The plaintext and an encryption key are supplied to an algorithm, which creates the ciphertext. This ciphertext can be decrypted with a valid key. Data stored on a computer often needs to be transferred over the internet or a computer network. While sending data across a network, the integrity and security of the digital data need to be maintained; encryption plays a key role in providing that integrity. There are some core security features that need to be maintained: data integrity, authentication, and non-repudiation. Authentication means the data’s origin needs to be verified. Data integrity ensures that the content has not been altered since it was sent. And non-repudiation ensures the sender cannot deny having sent the message. An encryption process serves these primary security aspects. Like Cryptography, Encryption has two modes: symmetric and asymmetric. In symmetric encryption, the same secret key is shared between the sender and receiver for both encryption and decryption. The asymmetric approach, on the other hand, uses two different keys, public and private. Encryption is commonly used to protect information in civilian systems, by governments and by the military. Customers’ personal and banking-related data is highly prone to theft; encrypting such files is always a boon in case the security system fails to protect the confidential data. Encryption may at first seem like a complicated approach, but various data loss prevention software handles it efficiently.
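The integrity and authentication features described above can be sketched with an HMAC, a keyed hash available in the Python standard library: the sender attaches a tag computed from the message and a shared key, and the receiver recomputes the tag to detect tampering.

```python
import hashlib
import hmac

# Sketch of integrity + authentication with an HMAC: a tag computed with a
# shared key proves both who sent the message and that it was not altered.

def sign(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign(message, key), tag)

key = b"shared secret"
tag = sign(b"transfer 100", key)
assert verify(b"transfer 100", tag, key)        # authentic, unaltered
assert not verify(b"transfer 900", tag, key)    # tampering detected
```

Note that an HMAC provides integrity and authentication but not non-repudiation, since both parties hold the same key; non-repudiation requires an asymmetric digital signature.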
Head To Head Comparison Between Cryptography and Encryption
Below are the top six differences between Cryptography and Encryption.
Key Differences Between Cryptography and Encryption
Both are popular choices in the market; let us discuss some of the major differences:
- Cryptography is the study of concepts like encryption and decryption, used to provide secure communication, whereas Encryption is the process of encoding a message with an algorithm.
- Cryptography can be considered a field of study, which encompasses many techniques and technologies, whereas Encryption is more mathematical and algorithmic in nature.
- Cryptography, being a field of study, has broader categories and ranges; encryption is one such technique. Encryption is one of the aspects of Cryptography that can efficiently encode the communication process.
- Cryptography is more generic in nature and uses digital signatures and other techniques to provide security for digital data, whereas Encryption is utilized with a set of algorithms, widely known as ciphers, to encrypt digital data.
- Cryptography has symmetric and asymmetric versions, with the concept of shared and non-shared keys, whereas Encryption follows the same approach with some specific terms like ciphertext, plaintext, and cipher.
- Cryptography involves working with algorithms with basic cryptographic properties, whereas Encryption is one of the subsets of Cryptography that uses mathematical algorithms called ciphers.
- Cryptography has wide-ranging applications, from classical cryptography to digital data, whereas Encryption is utilized to encode data in transit over a computer network.
- Cryptography’s fields include computer programming, algorithms, mathematics, information theory, and transmission technology, whereas Encryption is more digitalized in nature in the modern era.
- Cryptography involves two major components, called Encryption and Decryption, whereas Encryption is a process of safeguarding information to prevent unauthorized and illegal usage.
- Cryptography acts as a superset of Encryption, i.e. every process and term used for Encryption can be said to be a part of Cryptography, whereas Encryption, being a subset, has its own specific terms and processes.
Cryptography vs Encryption Comparison Table
The comparison between Cryptography and Encryption is as follows:
Cryptography involves various techniques and technologies, including algorithms, mathematics, information theory, transmission, encryption, etc. Encryption is one such technique of Cryptography. A standalone encryption process can keep a message confidential, but at the same time, other techniques and strategies are required to provide the integrity and authenticity of the message. So, in a nutshell, a successful scheme should provide data integrity, authentication, and non-repudiation, which is what Cryptography as a whole provides.
Encryption is provided in two forms, symmetric and asymmetric. Symmetric encryption involves a single key shared between sender and receiver. Asymmetric encryption, on the other hand, involves a pair of keys, public and private. Thus, a user can choose between the two forms. Public-key cryptography is used to implement many schemes, such as digital signatures. Various software is based on public-key algorithms, which are crucial in today’s world for keeping digital data safe and reliable. One can say that techniques like cryptography and encryption are the basis of a secure and reliable digital data mechanism; the internet and the digital world would not survive without these two pillars of safety.
This has been a guide to the key differences between Cryptography and Encryption.