Malleability Is a Measure of How Easily a Metal Can Be Hammered, Rolled, or Pressed into Shape
Malleability
Noble Metals (Chemistry)
Hubert Schmidbaur, John L. Cihonski, in Encyclopedia of Physical Science and Technology (Third Edition), 2003
I.C.1 Gold
Gold is known mainly for its color, electrical conductivity, ductility, and corrosion resistance. The malleability of gold is the highest of all metals; 1.0 troy ounce can be spread into approximately 300 ft² of foil. Gold by itself is not strong enough for many applications and must be alloyed. It is resistant to oxygen, sulfur, selenium, and most reagents, even in the presence of air. It reacts with tellurium at approximately 475 °C and with the halogens in the presence of moisture. Dry chlorine above 80 °C is corrosive to gold, as are fluorine above 310 °C and iodine above 480 °C. The metal is attacked by aqua regia, by hot H2SO4, by cyanide and alkali in the presence of an oxidizing agent, by HCl/Cl2, and by arsenic and phosphorous acids. Gold has the highest atomic weight and ionization potential of the noble metals and the second lowest boiling point. It has one naturally occurring isotope, 197Au, and 26 unstable ones, of which the most often used is 198Au, with a half-life of 2.7 days. Gold and its alloys are available in many forms, such as ingot, wire, tubing, sheet, ribbon, sponge, and powder.
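As a rough check on that figure, here is a minimal Python sketch (an illustrative addition, not part of the original text) that estimates the foil thickness implied by spreading one troy ounce over 300 ft², assuming a gold density of about 19.3 g/cm³ (a standard handbook value, not quoted above).

```python
# Estimate the gold foil thickness implied by spreading one troy ounce
# over ~300 square feet. The density is an assumed handbook value.

TROY_OUNCE_G = 31.10          # grams per troy ounce
GOLD_DENSITY_G_CM3 = 19.3     # assumed density of pure gold
AREA_FT2 = 300.0              # quoted foil area
CM2_PER_FT2 = 30.48 ** 2      # square centimetres per square foot

volume_cm3 = TROY_OUNCE_G / GOLD_DENSITY_G_CM3
area_cm2 = AREA_FT2 * CM2_PER_FT2
thickness_cm = volume_cm3 / area_cm2

print(f"Implied foil thickness: {thickness_cm:.2e} cm (~{thickness_cm * 1e7:.0f} nm)")
```

The result is on the order of 60 nm, only a few hundred atomic layers, which is why such foil can be thin enough to transmit light.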
https://www.sciencedirect.com/science/article/pii/B0122274105004816
Lead Alloys: Alloying, Properties, and Applications
J.F. Smith, in Encyclopedia of Materials: Science and Technology, 2001
1 Mechanical or Structural Properties
The mechanical or structural properties of lead and lead alloys are relatively poor compared to other metals. Lead offers useful properties such as good corrosion resistance, malleability, energy absorption, and electrical conductivity. However, it is often used in conjunction with other, structurally superior, materials to produce an effect that neither could achieve separately (Lead Industries Association 1984). For example, sheet lead is bonded to steel for chemical tank linings, noise and radiation shielding, or as a lining for chemical or nuclear facility piping. Lead alloy solder applied to copper or steel is used in roofing applications. Physical constants of lead are shown in Table 2.
Table 2 Physical constants of lead

| Property | Value |
|---|---|
| Crystal structure | Face-centered cubic |
| Lattice constant, side (nm) | 0.49445 |
| Closest approach of atoms (nm) | 0.35000 |
| Melting point @ 1.01 N mm−2 (°C) | 327.4 |
| Density, cast solid @ 20 °C (g cm−3) | 11.34 |
| Density, just solid @ 327.4 °C (g cm−3) | 11.005 |
| Density, just liquid @ 327.4 °C (g cm−3) | 10.686 |
| Vapor pressure @ 1000 °C (N mm−2) | 1.77 |
| Boiling point @ 1 bar (°C) | 1725 |
| Specific heat, 0–100 °C (kJ kg−1 K−1) | 0.1310 |
| Magnetic susceptibility per g, 18–330 °C | −0.12×10−6 |
| Latent heat of fusion (kJ kg−1) | 23.40 |
| Coefficient of expansion, 20–100 °C (10−6 K−1) | 29.1 |
| Solidification shrinkage (%) | 3.85 |
| Recrystallization temperature (°C) | −59 |
| Electrical resistivity @ 20 °C (μΩ cm) | 20.648 |
| Atomic number | 82 |
| Atomic weight | 207.22 |
Lead does not dissolve in dilute acids except in the presence of an ample supply of oxygen, owing in part to the fact that hydrogen is evolved on lead only at a considerable overvoltage. Lead is usually protected from dissolution by the formation of an insoluble surface coating that shields the underlying metal. The stability of lead in sulfuric acid at concentrations from 10% to more than 95%, and at temperatures from room temperature to 150 °C, is critical for its use in storage batteries and for numerous chemical manufacturing applications. Lead is practically inert to most commercial acids except nitric acid, which, because it is a strong oxidizer, rapidly attacks lead, especially at concentrations below 50%. Above that level, up to 95%, the effect is minimal (Lead Industries Association 1986).
Some mechanical properties are listed in Table 3.
Table 3 Mechanical properties of lead

| Property | Rolled sheet | Extruded strip |
|---|---|---|
| Tensile strength (MPa) | 16.9 | 15.2 |
| Elongation at break (%) | 57 | 48 |
| Fatigue resistance (endurance limit at 10×10 stress cycles, MPa) | 5.0 | |
| Creep (% h−1) at stress | Room temperature | 65 °C |
|---|---|---|
| 1.38 MPa | 0.4×10−5 | 6×10−4 |
| 2.07 MPa | 1.5×10−5 | 50×10−4 |
| 2.76 MPa | 3.0×10−5 | 230×10−4 |
a. ASTM B29–92 (see footnote e, Table 1).
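To put the tabulated creep rates in perspective, the short Python sketch below (an added illustration, not part of the source chapter) extrapolates each steady-state rate to one year, naively assuming the rate stays constant over that period. The result shows why lead is so often supported by structurally superior materials, as noted above.

```python
# Creep strain accumulated over one year, naively assuming the tabulated
# steady-state rates (% per hour) remain constant for the whole period.

HOURS_PER_YEAR = 24 * 365

creep_rates = {  # stress: (rate at room temperature, rate at 65 C), in % per hour
    "1.38 MPa": (0.4e-5, 6e-4),
    "2.07 MPa": (1.5e-5, 50e-4),
    "2.76 MPa": (3.0e-5, 230e-4),
}

for stress, (rate_rt, rate_65) in creep_rates.items():
    print(f"{stress}: ~{rate_rt * HOURS_PER_YEAR:.2f}% per year at room temperature, "
          f"~{rate_65 * HOURS_PER_YEAR:.0f}% per year at 65 °C")
```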
https://www.sciencedirect.com/science/article/pii/B0080431526007786
The Hückel Molecular Orbital Method
J.E. House, in Fundamentals of Quantum Mechanics (Third Edition), 2018
9.8 Band Theory of Metals
One view of a solid metal is that it consists of a collection of positively charged ions surrounded by mobile electrons. When viewed in this way, the electrical conductivity and malleability of metals follow logically. Alloys are solid solutions of two or more metals, so in this picture substituting one type of ion for another produces an alloy. It is well known that when metals are combined with hydrogen (hydrides), oxygen (oxides), nitrogen (nitrides), or carbon (carbides), the metallic properties mentioned above are greatly diminished.
Even though some electrons may be mobile, it is largely those in the valence shell that are capable of being moved. The most common structures of metal lattices are cubic closest packed and hexagonal closest packed. In these structures, each metal atom (except those on a surface or edge) has 12 nearest neighbors. Bonding between metal atoms consists of a sharing of electrons, which are mobile because they reside in conduction bands. The result is that although metals are malleable, the lattices generally have high cohesion and strength.
Even though metal atoms may not move through a solid lattice, there is motion within the lattice: the lattice members vibrate about their average positions. The resistivity of a metal results from the fact that the metal atoms hinder the motion of the electrons through the lattice. When a solid is heated, the vibration of the metal atoms becomes more extensive. This increase in vibrational amplitude leads to greater impedance to the flow of electrons, so the resistivity of the metal increases with temperature. Conductivity is the reciprocal of resistivity, so the conductivity of the metal decreases with an increase in temperature.
The molecular orbital approach is a suitable way to describe bonding in many systems. When applied to bonding in a metal, it is assumed that the atomic orbitals of the metal atoms combine to form molecular orbitals that extend over a large number of atoms. At least for atoms in the interior of the metal, each atom can be presumed to contribute an orbital to the combination. As shown in Section 9.1, when two atomic orbitals combine, two molecular orbitals result, one designated as a bonding orbital and the other an antibonding orbital. The interaction of three atoms leads to the formation of three molecular orbitals (see Section 9.4). The calculations described earlier for the ethylene, allyl, and butadiene molecules indicate that as the number of molecular orbitals increases, the difference in energy between them decreases. Figure 9.13 illustrates this principle as the number of atoms increases to Avogadro's number N.
In the Hückel approach, the difference in energy between adjacent molecular orbitals decreases as the number of orbitals increases to the point where the term band is applied. Thermal energy as a function of temperature can be represented by kT, where k is Boltzmann's constant and T is the temperature (K). For a collection of molecular orbitals that contains a large number of atoms, the separation between adjacent orbitals is smaller than kT. As a result, numerous closely spaced molecular orbitals may be populated to form a conduction band. Because vibrations of metal atoms in a lattice increase with temperature, the conductivity of metals decreases as the temperature increases as a result of reduced electron mobility.
For a set of molecular orbitals in the Hückel approximation, the energy of the nth orbital is given by
E_n = α + 2β cos[nπ/(N + 1)]  (9.105)
In this equation, α represents a Coulomb integral (Hii), β is a resonance integral (Hij), and N approaches infinity for a metal. As a result, there are N energy levels that make up an energy band with an overall width approaching 4β. Within this band, the difference in energy between the n and n + 1 levels approaches zero as N increases. This description of the energy levels in a metal gives rise to the term band theory, and it is illustrated in Fig. 9.13.
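The collapse of the level spacing can be verified numerically. The following Python sketch (a minimal illustration added here, not taken from the text) diagonalizes the Hückel matrix for a linear chain of N atoms, with α on the diagonal and β on the first off-diagonals, and reports how the bandwidth approaches 4|β| while the largest gap between adjacent levels shrinks toward zero.

```python
import numpy as np

def huckel_levels(n_atoms, alpha=0.0, beta=-1.0):
    """Eigenvalues of the Hückel matrix for a linear chain of n_atoms."""
    h = (np.diag([alpha] * n_atoms)
         + np.diag([beta] * (n_atoms - 1), 1)
         + np.diag([beta] * (n_atoms - 1), -1))
    return np.sort(np.linalg.eigvalsh(h))

for n in (2, 3, 10, 100, 1000):
    levels = huckel_levels(n)
    width = levels[-1] - levels[0]       # band width, in units of |beta|
    max_gap = np.max(np.diff(levels))    # largest spacing between adjacent levels
    print(f"N = {n:5d}: width = {width:.4f}|beta|, largest gap = {max_gap:.4f}|beta|")

# As N grows, the width tends to 4|beta| (cf. Eq. 9.105) while the spacing
# between adjacent orbitals collapses toward zero, i.e., a band forms.
```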
If the metal being considered is presumed to be sodium (as is the case shown in Fig. 9.13), the electron configuration is 3s1, so the set of bonding orbitals constructed from these orbitals is only half filled. It is sometimes considered that all of the sodium orbitals can form bands, but the gaps between the bands limit electrical conductance to the upper band. For purposes of illustration, it will be assumed that the core electrons that reside in the 1s and 2s orbitals are not involved in bonding. The resulting band structure can be illustrated as shown in Fig. 9.14. When N atoms are considered, the electron population of the bands can be expressed as 2(2l + 1)N, in which l is the orbital quantum number. Both the 2p and 3s orbitals interact to form bands because both types of orbitals are occupied, and therefore a band is formed from the combination of each type of orbital.
For sodium, the band of highest energy is only half filled because each sodium atom has a single electron in the 3s orbital. As a result, it is possible for electrons to enter and leave the band. This action can be caused by light in the visible region, which gives rise to an absorption and emission process at the surface of the metal. This process causes the metal to have the shiny appearance known as metallic luster. With electrons being able to move in a partially filled band, the metal is a good conductor of electricity.
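A small numeric check of the 2(2l + 1)N band-capacity expression and of the half-filled 3s band (an added illustration, not from the text) is given below.

```python
def band_capacity(l: int, n_atoms: float) -> float:
    """Maximum electron capacity, 2(2l + 1)N, of a band built from one orbital
    of quantum number l contributed by each of n_atoms atoms."""
    return 2 * (2 * l + 1) * n_atoms

N = 1.0e6  # an arbitrary large number of sodium atoms, for illustration

s_band_capacity = band_capacity(l=0, n_atoms=N)   # 2N states
p_band_capacity = band_capacity(l=1, n_atoms=N)   # 6N states

# Each sodium atom contributes one 3s electron, so the 3s band holds N
# electrons out of a possible 2N, i.e., it is exactly half filled.
electrons_3s = 1 * N
print(f"3s band: {electrons_3s:.0f} of {s_band_capacity:.0f} states filled "
      f"({electrons_3s / s_band_capacity:.0%})")
print(f"2p band: capacity {p_band_capacity:.0f}, filled by the 6N core 2p electrons")
```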
Materials that behave as semiconductors or insulators have greater energy differences between the bands (sometimes referred to as the band gap). In the case of insulators, the movement of electrons requires energies of a magnitude similar to the binding energies of electrons in atoms (up to 10–12 eV). Semiconductors have band gaps that typically range from about 1 to 2.5 eV. For example, some representative values are as follows: Ge, 0.67; Si, 1.14; and CdS, 2.42 eV (Serway and Jewett, 2014). At room temperature, the thermal energy per mole is RT, or about 2.49 kJ mol−1 = 596 cal mol−1 = 0.596 kcal mol−1. This is equivalent to 0.026 eV per molecule, so increasing the temperature increases the number of electrons that can populate orbitals of higher energy, as is the case with a semiconductor (see Section 6.7). For a much more complete discussion of semiconductors, see the books of Serway and Jewett (2014) and Blinder (2004).
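To make the comparison between thermal energy and band gaps concrete, here is a short Python sketch (an added example using the band gaps quoted above); it computes kT at two temperatures and the factor exp(−Eg/2kT), which roughly governs the intrinsic carrier population of a semiconductor.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

# Band gaps quoted in the text (eV)
band_gaps = {"Ge": 0.67, "Si": 1.14, "CdS": 2.42}

for temp in (300, 600):
    kt = K_B_EV * temp
    print(f"T = {temp} K, kT = {kt:.3f} eV")
    for material, gap in band_gaps.items():
        # Intrinsic carrier concentration scales roughly as exp(-Eg / 2kT)
        factor = math.exp(-gap / (2 * kt))
        print(f"  {material}: Eg = {gap:.2f} eV, Eg/kT = {gap / kt:5.1f}, "
              f"exp(-Eg/2kT) = {factor:.2e}")
```

Even a modest temperature rise changes these exponential factors by orders of magnitude, which is why the conductivity of a semiconductor increases with temperature while that of a metal decreases.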
The HMO theory is adequate to deal with many significant problems in molecular structure and reactivity. In view of its gross approximations and very simplistic approach, it is surprising how many qualitative aspects of molecular structure and reactivity can be dealt with using the Hückel approach. For a more complete discussion of this topic, consult the references listed at the end of this book. As was stated at the beginning of this chapter "In spite of its approximate nature, Hückel molecular orbital (HMO) theory has proved itself extremely useful in elucidating problems concerned with electronic structures of π-electron systems" (Purins and Karplus, 1968). It is still true.
https://www.sciencedirect.com/science/article/pii/B9780128092422000097
Technology Assessment: Concepts and Methods
Armin Grunwald, in Philosophy of Technology and Engineering Sciences, 2009
2.1 The historical origins of technology assessment
TA arose from specific historical circumstances in the 1960s and 1970s. The US congressional representative Daddario is now held to be the coiner of the term and of the basic theory underlying TA [Bimber, 1996], which culminated in the creation of the Office of Technology Assessment (OTA) at Congress in 1972 [United States Senate, 1972]. The concrete background was the asymmetrical access to technically and politically relevant information between the USA's legislative and executive bodies. While the executive, thanks to the official apparatus at its command, was able to draw on practically any amount of information, parliament lagged far behind. This asymmetry was deemed to endanger the highly important balance of power between the legislature and the executive in technology-related issues. From this point of view, the aim of legislative TA was to restore parity [Bimber, 1996].
Parallel to this very specific development, radical changes were taking place in intellectual and historical respects, which were to prove pivotal to TA. First and foremost, the optimistic belief in scientific and technical progress, which had predominated in the post Second World War period, came under pressure. The ambivalence of technology was a central theme in both the Critical Theory of the Frankfurt School (Marcuse, Habermas) and in the Western "bourgeois" criticism of technology (Freyer, Schelsky) with its dialectical view of technological progress: "the liberating force of technology — the instrumentalisation of things — turns into a fetter of liberation; the instrumentalisation of man" [Marcuse, 1966, p. 159].
At the same time, broad segments of Western society were deeply unsettled by "The Limits to Growth" [Meadows et al., 1972], which for the first time addressed the grave environmental problems perceived as a side effect of technology and technicisation, and by discussions of military technology that raised the possibility of a nuclear war that could put an end to humanity. The optimistic, pro-progress assumption that whatever was scientifically and technically new would definitely benefit the individual and society was questioned. From the 1960s onward, a deepened insight into technological ambivalence led to a crisis of orientation in the way society dealt with science and technology. Without this crisis surrounding the optimistic belief in progress, TA would presumably never have developed or, more precisely, would never have extended beyond the modest confines of the above-mentioned US Congressional office.
Furthermore, the legitimization problems linked to technologically relevant decisions have been crucial to the genesis of TA. Problems with side effects, the finiteness of resources, and new ethical questions have all heightened decision-making complexity and have led to societal conflicts over the legitimacy of technology. The planning and decision-making procedures developed as early as the 1950s in the spirit of planning optimism [Camhis, 1979] turned out to be clearly unsuited to solving this problem. In addition, the technocratic and expertocratic character of these procedures became an issue in a society in which the populace and the media were paying closer attention to democracy and transparency [van Gunsteren, 1976]. Demands for a deliberative democracy [Barber, 1984] led to a climate in which it was particularly the critical aspects of scientific and technical progress that started being debated in the public arena.
The move away from metaphysical and philosophical assumptions about technology also spurred the emergence of TA, a field that focuses on the criteria and means underlying the concrete development of technology in concrete historical contexts, the conditions facilitating the malleability of technology in society, and the relevant constraints. In the post-metaphysical world [Habermas, 1988a], it is no longer a matter of humanity's technology-driven liberation from work constraints (Marx, Bloch) or of humanity's "salvation" thanks to engineering intervention (Dessauer); neither is it an issue of man's deplored "one-dimensionality" in a technicised world (Marcuse), of the "antiquatedness of man" in sharp contrast to the technology he has developed (Anders), or of fears of a technologically induced end to human history [Jonas, 1979]. It is more about the impact of technology and the concrete design of specific technical innovations, for instance in transportation, in information technology, in space flight, and in medicine. TA does not concern itself with technology as such but rather with concrete technical products, processes, services, and systems, and with their societal impacts and relevant general settings. These developments are reflected in the philosophy of technology, where more emphasis is placed on empirical research [Kroes and Meijers, 1995].
The problems mentioned at the outset concerning parliamentary decision-making on technology merely provided the "occasion" for the establishment of legislative TA facilities, not the deeper reasons for TA's formation, which are rooted in the experience of the ambivalence of technical progress, in problems surrounding technological legitimacy in a society with increasing demands for participation, and in the need to concretise and contextualise technology evaluation in complex decision-making situations. The emergence of TA is thus one of the very specific features that render our historical situation one that may be dubbed "reflexive modernity" [Beck et al., 1996].
https://www.sciencedirect.com/science/article/pii/B9780444516671500446
MINERALS | Definition and Classification
E.H. Nickel , in Encyclopedia of Geology, 2005
Historical Background
The ancient classification of minerals was based mainly on their practical uses, minerals being classified as gemstones, pigments, ores, etc. Probably the earliest classification based on external characteristics and on some physical properties, such as colour, fusibility, malleability, and fracture, was that of Geber (Jabir Ibn Hayyaan, 721–803), later extended by Avicenna (Ibn Sina, 980–1037), Agricola (1494–1555), and AG Werner (1749–1817). This system was substantially refined by F Mohs (1773–1839) in his Natural-History System of Mineralogy (1820). With Werner, physical classification attained its maturity and was in general use by the end of the eighteenth century. Linnaeus (1707–78) attempted to classify minerals primarily by their external morphology, with a hierarchical system involving subdivision into genus, order, and class. A purely chemical classification was proposed by T Bergmann (1735–84), but this approach was premature, because many chemical elements had not been discovered at that time and analytical procedures were in their infancy. AF Cronstedt (1722–65) seems to have been the first to devise a classification scheme involving both chemical and physical properties, with chemistry predominating. Systematic crystallography was initiated by JBLR de l'Isle (1736–90), and this concept was applied by RJ Haüy (1743–1822) in Traité de Minéralogie (1801), in which he presented a mineral classification scheme based on the 'nature of metals' or, as it would be expressed now, the nature of cations.
With advances in chemistry, chemical properties became increasingly important, and a chemical classification of minerals was proposed in 1819 by JJ Berzelius (1779–1848). He recognized that minerals with the same non-metal (anion or anionic group) have similar chemical properties and resemble each other far more than do minerals with a common metal. He considered minerals as salts of anions and anionic complexes, namely, as chlorides, sulphates, silicates, etc., rather than as minerals of Zn, Cu, etc. At this time, CS Weiss (1780–1856) introduced the seven crystal systems (1815) and Mitscherlich discovered isomorphy (1819) and polymorphy (1824). Gustav Rose (1798–1873) combined chemistry, isomorphy, and morphology to produce a chemical–morphological mineral system, and this was further developed by P von Groth (1843–1927) in his five editions of Tabellarische Übersicht der Mineralien nach ihrer Kristallographisch-chemischen Beziehungen. JD Dana (1813–95), in his System of Mineralogy (1837), based his classification system primarily on chemistry, and this emphasis has been maintained throughout subsequent editions of the System.
After 1912, following the discovery of the phenomenon of X-ray diffraction by crystals by M von Laue (1879–1960) and WH Bragg (1862–1942), and the elucidation of the first crystal structure (the mineral halite) in 1914 by WL Bragg (1890–1971), the crystal structures of minerals began to be taken into account in mineral classification schemes. The first classification of this type, which took into account the distribution of interatomic bonds in a structure, involved the structures of silicates, determined by Machatschki in 1928. This new field was rapidly expanded by WL Bragg in The Crystalline State (1933) and in the first edition of Atomic Structures of Minerals (1937). This combination of chemistry and structure in mineral classification was subsequently applied to many other categories of minerals, such as fluoraluminates (by Pabst; 1950), aluminosilicates (by Liebau; 1956), silicates and other minerals with tetrahedral complexes (by Zoltai; 1960), phosphates (by Liebau, in 1966, and by Corbridge, in 1971), sulphosalts (by Makovicky; 1981 and 1993), and borates (by Heller, in 1970, and by Strunz, in 1997).
The classification of silicates on the basis of polymerization of corner-sharing SiO4 tetrahedra from insular groups to dimers, chains, rings, sheets, and frameworks proved to be a particularly useful scheme but, with the exception of the borates, this concept could not be comprehensively extended to other categories of minerals. The polymerization of cation-centred polyhedra by sharing corners, edges, and faces to form various configurations has been applied in the classification of some minerals, such as sulphates (by Sabelli and Trosti-Ferroni; 1985), copper oxysalts (by Hawthorne; 1993), and phosphates (by Hawthorne; 1998), but such schemes have only limited applicability to minerals as a whole. Other classification schemes, such as those stressing genetic aspects of mineral formation (by Kostov; 1975) or interatomic bonding (by Godovikov; 1997) have not been generally adopted by the mineralogical community.
https://www.sciencedirect.com/science/article/pii/B0123693969002598
Nanodrug delivery systems for transdermal drug delivery
Irina Gheorghe, ... Mariana Carmen Chifiriuc, in Nanomaterials for Drug Delivery and Therapy, 2019
8.2.5 Ethosomes
Ethosomes are lipid vesicles (containing phospholipids, 20%–45% alcohol, and water) very similar to liposomes, but their high alcohol content makes them more flexible and more easily deformed when pressure is applied. Their main characteristics are therefore malleability and softness (Madsen et al., 2010; Pannala and Samala, 2012; Lu et al., 2011). These nanocarriers allow drugs to reach deeper skin layers and thereby enter the systemic circulation.
Ethosomes enhance drug delivery across the skin by two mechanisms: the fluidizing effect of ethanol on the phospholipid bilayers creates a deformable vesicle, and the disruption of the stratum corneum (SC) lipids by ethanol permits access of the ethosomes and their associated drugs to the deeper skin layers (Touitou et al., 2000).
Ethosomes are able to contain and deliver a wide range of molecules because they can transport highly lipophilic drugs. The transdermal penetration of testosterone from an ethosomal patch was significantly enhanced compared with a commercial patch (Ainbinder and Touitou, 2005). Ethosomes have shown improvements in topical and transdermal drug delivery. Dayan and Touitou suggested that ethosomal formulations containing trihexyphenidyl HCl (THP) represented a very promising tool for the transdermal delivery of THP as compared with classical liposomes, with the ethosomes showing higher encapsulation efficiency and a greater ability to deliver the THP to the deeper layers of the skin (Dayan and Touitou, 2000). Paolino et al. showed that pretreatment of the skin with an ethosomal formulation containing ammonium glycyrrhizinate reduced the intensity and duration of methyl nicotinate-induced erythema in healthy human volunteers, compared with hydroalcoholic solutions of the drug (Paolino et al., 2005). Similarly, Godin et al. demonstrated that ethosomal erythromycin showed improved in vivo antibacterial action in comparison to a hydroethanolic solution of the drug (Godin et al., 2005). Another study demonstrated that acyclovir loaded in ethosomes showed improved therapeutic efficacy compared with the standard treatment, as a result of the improved penetration of the active ingredient from the lipid–ethanol vehicle (Horwitz et al., 1999).
https://www.sciencedirect.com/science/article/pii/B9780128165058000102
Optical Nanofibers
Pablo Solano, ... Steven L. Rolston, in Advances In Atomic, Molecular, and Optical Physics, 2017
1.2 Nanofiber Platform
Before embarking on a thorough discussion of the nanofiber platform, it is important to point out the advantages we see over other nanophotonic structures.
Nanofibers can be produced in-house using a heat-and-pull method (Hoffman et al., 2014; Ward et al., 2014). The malleability of the glass ensures low surface roughness. The smoothness of the surface is a great asset, since it leads to ultra-high-transmission structures that can withstand high optical powers (almost one watt in vacuum (Hoffman et al., 2014)) without permanent damage to the fiber or degradation of the transmission.
Optical fibers also show great versatility in terms of connectivity to other systems. The advanced state of fiber-optic technology is an enormous advantage for pursuing quantum information devices on this platform, as it facilitates interaction and communication among different nodes and modular devices, as stated by Kimble (2008). Kurizki et al. (2015) and Xiang et al. (2013) review the growing area of hybrid quantum systems, which are increasing in importance in quantum optics and quantum information. They combine different kinds of systems (e.g., atomic, Rydberg, ionic, photonic, condensed matter) to utilize the best coherence available for different tasks: processing or memory. Hoffman et al. (2011) and Kim et al. (2011) propose trapping atoms around an ONF and coupling them through their magnetic dipole to a superconducting circuit in a cryogenic environment. Hafezi et al. (2012) study the atomic interface between microwave and optical photons in such a system.
One of the most fascinating developments is the use of ONFs in quantum optics for the study of chiral quantum optics (Lodahl et al., 2017) and its connections with many-body physics. ONFs indeed provide a unique platform to study this nascent area.
One difficulty with nanofibers comes from the polarization structure of the modes and the limited control over it along the waist length. However, this has not been a major drawback for experiments. Also, the atom-nanofiber coupling values currently do not reach those recently seen in nanophotonic devices (Goban et al., 2015; Hood et al., 2016; Yu et al., 2014), but the entire parameter space for traps has yet to be explored, and improvements may be possible.
https://www.sciencedirect.com/science/article/pii/S1049250X1730006X
Philosophy of Computing and Information Technology
Philip Brey, Johnny Hartz Søraker, in Philosophy of Technology and Engineering Sciences, 2009
Foundational issues in computer ethics
Foundational, metaethical and methodological issues have received considerable attention in computer ethics. Many of these issues have been discussed in the context of the so-called foundationalist debate in computer ethics [Floridi and Sanders, 2002; Himma, 2007a]. This is an ongoing metatheoretical debate on the nature and justification of computer ethics and its relation to metaethical theories. Three central questions are: "Is computer ethics a legitimate field of applied ethics?", "Does computer ethics raise any ethical issues that are new or unique?" and "Does computer ethics require substantially new ethical theories, concepts or methodologies different from those used elsewhere in applied ethics?".
The first question, whether computer ethics is a legitimate field of applied ethics, has often been discussed in the context of the other two questions, with discussants arguing that the legitimacy of computer ethics depends on the existence of unique ethical issues or questions in relation to computer technology. The debate on whether such issues exist has been called the uniqueness debate [Tavani, 2002]. Maner [1996] has argued that unique features of computer systems, like logical malleability, superhuman complexity, and the ability to make exact copies, raise unique ethical issues for which no non-computer analogues exist. Others remain unconvinced that any computer ethics issue is genuinely unique. Johnson [2003] has proposed that issues in computer ethics are familiar in that they involve traditional ethical concepts and principles like privacy, responsibility, harm, and ownership. The application of these concepts and principles is, however, not straightforward because of special properties of computer technology, which require a rethinking and retooling of ethical notions and new ways of applying them.
Floridi and Sanders [2002; Floridi, 2003] do not propose the existence of unique ethical issues but rather argue for the need for new ethical theory. They argue that computer ethics needs a metaethical and macrotheoretical foundation that differs from standard macroethical theories like utilitarianism and Kantianism. They propose a macroethical theory they call Information Ethics, which assigns intrinsic value to information. The theory covers not just digital or analogue information but in fact analyzes all of reality as having an informational ontology, being built out of informational objects. Since informational objects are postulated to have intrinsic value, moral consideration should be given to them, including the informational objects produced by computers.
In contrast to these various authors, Himma [2007a] has argued that computer technology does not need to raise new ethical issues or require new ethical theories to be a legitimate field of applied ethics. He argues that issues in computer ethics may not be unique and may be approached with traditional ethical theories. Nevertheless, computer ethics is a legitimate field because computer technology has given rise to an identifiable cluster of moral issues, in much the same way as medical ethics and other fields of applied ethics.
Largely separately from the foundationalist debate, several authors have discussed the issue of proper methodology in computer ethics, discussing standard methods of applied ethics and their limitations for computer ethics [Van den Hoven, 2008; Brey, 2000]. An important recent development that has methodological and perhaps also metaethical ramifications is the increased focus on cultural issues. In intercultural information ethics [Ess and Hongladarom, 2007; Brey, 2007], ethicists attempt to compare and come to grips with the vastly different moral attitudes and behaviors that exist towards information and information technology in different cultures. In line with this development, Gorniak-Kocykowska [1995] and others have argued that the global character of cyberspace requires a global ethics which transcends cultural differences in value systems.
https://www.sciencedirect.com/science/article/pii/B9780444516671500513
GOLD
M.A. McKibben, in Encyclopedia of Geology, 2005
Characteristics and Uses
Gold is a rare heavy metal that is soft, malleable, ductile, and bright sun yellow in colour when pure (Figure 1). The last property is reflected by its chemical symbol, Au, which comes from the Latin word aurum ('shining dawn'). Gold resists chemical attack and corrosion, has excellent electrical conductivity, and reflects infrared radiation.
Humans have made use of gold for more than 40 000 years. Because of its inertness and value, most of the gold that has ever been mined (about 130 000 metric tons) is still in use! Most gold is hoarded, in the form of bullion, coins, and jewellery. This usage stems from the metal's aesthetics, its role as a medium of exchange among banks and governments, and its perceived value by individuals and families as a hedge against economic uncertainty. Much like diamonds, gold has an emotional (and sometimes irrational) appeal that drives its free-market value to levels far above those that would otherwise be justified solely by its practical uses in technology.
Practical uses of gold take advantage of its chemical inertness, electrical conductivity, malleability, and ductility. Gold's high electrical conductivity and solderability make it excellent for creating reliable electrical contacts; it can be drawn into wires thinner than a human hair and pounded into sheets thin enough to pass light. It is therefore used in electronics (circuit boards, connectors, contacts, thermocouples, potentiometers), corrosion-resistant processing equipment (acid vats), infrared reflectors (windows of high-rise buildings, astronaut's helmet visors), and dental applications (fillings, crowns, etc.).
Pure gold is too soft, malleable, and ductile for uses that require physical endurance, particularly jewellery, so instead mixtures or alloys of gold with other metals (copper, nickel, silver, platinum, etc.) are used to enhance an object's durability. The gold content of such alloys can be defined in terms of carats, or parts of gold per 24 parts of total metal by weight. Durable jewellery is commonly made from 14 carat gold (containing 58.3% gold). Knowing the remaining metals used in gold alloys can be important to people who are allergic to specific metals in jewellery, such as nickel, particularly when the skin is pierced (earrings, nose rings, etc.).
The term 'gold-filled', which is used in jewellery making, means a layer of gold alloy that is placed over a less valuable core of base metal – in a few countries such layers are required by law to be at least 10 carats. 'Rolled gold plate' may be applied to layers that are less pure. Gold 'electroplate' jewellery must have at least seven millionths of an inch of gold overlaid, otherwise terms such as 'gold flashed' and 'gold washed' must be used.
Ironically, during the Spanish exploitation of the New World, comparatively 'worthless' platinum objects from South America were sometimes plated over with gold and passed off as pure. In modern times the relative values of these two metals are usually reversed, and the opposite strategy would be more lucrative. The consumer must be cautious at all times!
Another scale for expressing the purity of gold (mainly in bars, ingots, and coins) is fineness, based on parts of gold per 1000 parts of total metal. A gold bar that is '995 fine' thus contains 99.5% gold by weight.
The most common unit of weight for gold is the troy ounce, which is equal to 1.097 avoirdupois ounces or 31.10 g. A typical bar of gold (such as those held by central banks and governments) weighs about 400 troy ounces, or about 27.5 pounds (∼12.4 kg). A troy pound contains 12 troy ounces, and a troy ounce contains 20 troy pennyweights, the latter being the unit of weight most often used in jewellery. A troy pennyweight (dwt) equals 1.555 g.
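The carat, fineness, and troy-weight relationships described above reduce to simple arithmetic. The snippet below (an illustrative addition, using only the figures quoted in the text) wraps them in small helper functions.

```python
TROY_OUNCE_G = 31.10       # grams per troy ounce
PENNYWEIGHTS_PER_OZ = 20   # troy pennyweights per troy ounce

def carat_to_percent(carats: float) -> float:
    """Gold content as a weight percentage, given carats (parts per 24)."""
    return 100.0 * carats / 24.0

def fineness_to_percent(fineness: float) -> float:
    """Gold content as a weight percentage, given fineness (parts per 1000)."""
    return fineness / 10.0

def bar_mass_kg(troy_ounces: float = 400.0) -> float:
    """Mass in kilograms of a gold bar of the given troy-ounce weight."""
    return troy_ounces * TROY_OUNCE_G / 1000.0

print(f"14 carat gold: {carat_to_percent(14):.1f}% gold")        # ~58.3%
print(f"'995 fine' bar: {fineness_to_percent(995):.1f}% gold")   # 99.5%
print(f"400 troy oz bar: {bar_mass_kg():.2f} kg")                # ~12.44 kg
print(f"1 pennyweight: {TROY_OUNCE_G / PENNYWEIGHTS_PER_OZ:.3f} g")  # 1.555 g
```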
https://www.sciencedirect.com/science/article/pii/B0123693969002616
Polymers, Inorganic and Organometallic
Martel Zeldin, in Encyclopedia of Physical Science and Technology (Third Edition), 2003
III.D Other Main Group Element Polymers
III.D.1 Heteroatom–Sulfur Polymers
Catenated sulfur polymers were discussed earlier. The low cost and ready supply of sulfur have led to the development of a number of heteroatom polymers involving sulfur with other main group elements (e.g., N, C, O, P, and B). Several examples, with their applications, are given in Table IV.
Table IV Examples of heteroatom–sulfur polymers and their applications

| Polymer | General formula | Applications |
|---|---|---|
| Poly(thioethers) | | Engineering plastics, elastomers, photoresists, optical filters, heat-resistant adhesives |
| Poly(thiazoles) | | Conducting polymers, heat-dissipating coatings |
| Poly(sulfamides) | | Coatings for imaging paper |
| Poly(thiophenes) | | Antistatic coatings and films, biosensors, storage batteries |
| Poly(sulfones) | | Engineering plastics |
A particularly interesting polymer of sulfur and nitrogen, known as polythiazyl or poly(sulfur nitride), is obtained by the sequence of reactions illustrated in Fig. 31. The polymer is crystalline and fibrous like asbestos. It has a gold-luster color when viewed perpendicular to the chain axis and behaves in many ways like a metal: high malleability, reflectivity, and electrical conductivity at room temperature. Moreover, it exhibits anisotropic superconductivity at 0.26 K. The conductivity has been explained in terms of the polymer structure (Fig. 32) and bonding, in which long S–N chains assume a cis–trans–planar conformation with π-bond delocalization along the chains and with S–S, S–N, and N–N orbital overlap between adjacent chains. Polythiazyl absorbs bromine to produce (SNBrx)n (x = 0.25–0.40), a black, fibrous polymer with increased conductivity. Despite their unusual properties, polythiazyls have not been commercialized because of their oxidative and thermal instability in air and their tendency to detonate with heat and pressure.
III.D.2 Aluminum-Containing Polymers
Aluminum forms heteroatom oligomers and polymers mainly with oxygen and nitrogen. The Al–O polymers are formed by (1) hydrolysis of multifunctional alanes (RnAlX3−n, n = 0, 1; R = organic; X = Cl, OR) to give poly(organoaloxane)s [Eq. (27)] or (2) the reaction of trialkyl- or trialkoxyalanes with organic acids to give poly(acyloxyaloxane)s (Fig. 33A, B):
(27)
In the latter reaction, an excess of carboxylic acid or chelating reagent such as acetylacetone (Fig. 33C), diol, or aminoalcohol produces a more soluble, higher MW, and processible material. The bulk of the evidence suggests that the products are highly crosslinked network structures. These preceramic polymers are used to prepare high-performance alumina (Al2O3) fibers. Generally, AlO polymers range from gums to brittle solids. In addition to ceramic fibers, they have applications as lubricants, fuel additives, catalysts, gelling and drying agents, water repellants, and additives to paints and varnishes. If oxygen in the backbone is replaced with NH or NR, the polymers are thermally unstable and sensitive to acids, bases, and water.
A variety of Al–O–Si–O polymers, poly(aluminosiloxanes), with Si:Al ratios from 0.8 to 23, are known. These materials are prepared by the reaction of sodium oligo(organosiloxanate)s with AlCl3 [Eq. (28)]:
(28)
If a trifunctional silane is used, the Si:Al ratio is larger than 7 and a soluble polymer, believed to have a ladder structure, is obtained (Fig. 34).
The aluminosilicates are anionic polymers that are present in natural and synthetic minerals where Al(III) ions replace Si(IV) in the lattice. Examples of 2D sheet aluminosilicates are clays, micas, and talc. Feldspar and zeolites (molecular sieves) are 3D network structures in which AlO4 and SiO4 units share tetrahedral vertices. The cavities created in these network polymers accommodate cations such as Na+ and K+. Depending on the size of the cavity, these cations can be displaced by other cations, hence their use as ion exchange materials. Moreover, by synthetically controlling the size and shape of the cavity, small molecules like water, methanol, or gases can be selectively trapped. Thus these materials are used in laboratory and industrial separation and purification processes (Table V).
Table V Aluminosilicate structural types, examples, and applications

| Structural type | Example | General formula | Applications |
|---|---|---|---|
| Sheet | Mica | K(Mg, Fe)3[(HO)2AlSi3O10] | Electronics (capacitors, diodes), cosmetics (powders) |
| 3D | Feldspar | K[(AlO2)(SiO2)3] | Glass, ceramics, semiprecious stones |
| | Zeolites | Na13Ca11Mg9KAl55Si137O384 | Water softener, gas separation and purification |
| | Molecular sieves (A) | Na12[(AlO2)12(SiO2)12·xH2O] | Small-molecule (4 Å) absorber (e.g., H2O) |
| | Molecular sieves (X) | Na86[(AlO2)86(SiO2)106·xH2O] | Medium-molecule (8 Å) absorber (e.g., CH3OH) |
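Si:Al ratios recur throughout this section; as a quick numerical illustration, the sketch below (an added example, not part of the source chapter) computes the ratio for each idealized formula in Table V.

```python
# Si:Al ratios computed from the idealized formulas in Table V.
# Illustrative only; real mineral compositions deviate from these formulas.

formulas = {  # name: (Si atoms, Al atoms) per formula unit
    "Mica, K(Mg,Fe)3[(HO)2AlSi3O10]": (3, 1),
    "Feldspar, K[(AlO2)(SiO2)3]": (3, 1),
    "Zeolite, Na13Ca11Mg9KAl55Si137O384": (137, 55),
    "Molecular sieve A, Na12[(AlO2)12(SiO2)12]": (12, 12),
    "Molecular sieve X, Na86[(AlO2)86(SiO2)106]": (106, 86),
}

for name, (si, al) in formulas.items():
    print(f"{name}: Si:Al = {si / al:.2f}")
```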
III.D.3 Tin-Containing Polymers
Polystannanes as analogs of polysilanes were mentioned earlier. There are also a number of heteroatom polymers of tin in which tin is part of, or pendant to, the polymer backbone. Examples of the former are Sn(II)X2 (X = Cl, OCH3), which has an extended chain structure of three-coordinate Sn with two X units in the chain (Fig. 35A), and (CH3)3Sn(IV)Y (Y = F, azide), which is a strongly associated polymer of five-coordinate Sn with Y units forming bridges between tin moieties (Fig. 35B, C). (CH3)2Sn(IV)F2 forms a 2D infinite-sheet structure in which each Sn is in an octahedral environment, with the methyl groups lying above and below the plane of the sheet and the F atoms forming bridges to four Sn atoms. Other types of tin-containing polymers with Sn–O–M (M = Si, Ti, or B) units in the chain have also been prepared. Some of these polymers have uses as plasticizers, fungicides for paints, fillers and reinforcing agents, and resin additives to enhance thermal and mechanical properties.
Vinyl esters such as tributyltin methacrylate can undergo polymerization to poly(tributyltin methacrylate) (Fig. 35D) using free radical, ionic, or coordination catalysts. These polymers have the Sn moiety pendant to the polymer backbone. Some of these polymers readily release the tin moiety in water and have potent antifouling, antifungal, and antibacterial properties. However, their toxicity to the environment has precluded widespread commercial applications. In an effort to circumvent the rapid hydrolytic release problem, styrene polymers with n-Bu3Sn bonded directly to the phenyl group have been prepared.
Unusual "cage" oligomeric organotin carboxylates that are prepared by the condensation of organostannoic acids with carboxylic acids or their salts are mainly tetramers and hexamers with formula [RSn(O)O2CR′]4 or 6. Evidence suggests that the reaction proceeds through the formation of ladder type intermediates (see Fig. 6) prior to closure to yield a stable drum-shaped product (Fig. 35E; only one of six carboxylate units is shown).
https://www.sciencedirect.com/science/article/pii/B0122274105009479
Source: https://www.sciencedirect.com/topics/physics-and-astronomy/malleability