
Transcribed up to section 1.9, Entropy of an Ideal Boltzmann Gas
author Dylan Holmes <ocsenave@gmail.com>
date Sun, 29 Apr 2012 02:38:22 -0500
#+TITLE: Statistical Mechanics
#+AUTHOR: E.T. Jaynes; edited by Dylan Holmes
#+EMAIL: rlm@mit.edu
#+KEYWORDS: statistical mechanics, thermostatics, thermodynamics, temperature, paradoxes, Jaynes
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+MATHJAX: align:"left" mathml:t path:"http://www.aurellem.org/MathJax/MathJax.js"
# "extensions/eqn-number.js"

#+begin_quote
*Note:* The following is a typeset version of
[[../sources/stat.mech.1.pdf][this unpublished book draft]], written by [[http://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes][E.T. Jaynes]]. I have only made
minor changes, e.g. to correct typographical errors, add references, or format equations. The
content itself is intact. --- Dylan
#+end_quote
* Development of Thermodynamics
Our first intuitive, or \ldquo{}subjective\rdquo{} notions of temperature
arise from the sensations of warmth and cold associated with our
sense of touch. Yet science has been able to convert this qualitative
sensation into an accurately defined quantitative notion,
which can be applied far beyond the range of our direct experience.
Today an experimentalist will report confidently that his
spin system was at a temperature of 2.51 degrees Kelvin; and a
theoretician will report with almost as much confidence that the
temperature at the center of the sun is about \(2 \times 10^7\) degrees
Kelvin.

The /fact/ that this has proved possible, and the main technical
ideas involved, are assumed already known to the reader;
and we are not concerned here with repeating standard material
already available in a dozen other textbooks. However,
thermodynamics, in spite of its great successes, firmly established
for over a century, has also produced a great deal of confusion
and a long list of \ldquo{}paradoxes\rdquo{} centering mostly
around the second law and the nature of irreversibility.
For this reason and others noted below, we want to dwell here at
some length on the /logic/ underlying the development of
thermodynamics. Our aim is to emphasize certain points which,
in the writer's opinion, are essential for clearing up the
confusion and resolving the paradoxes; but which are not
sufficiently emphasized---and indeed in many cases are
totally ignored---in other textbooks.
This attention to logic
would not be particularly needed if we regarded classical
thermodynamics (or, as it is increasingly being called,
/thermostatics/) as a closed subject, in which the fundamentals
are already completely established, and there is
nothing more to be learned about them. A person who believes
this will probably prefer a pure axiomatic approach, in which
the basic laws are simply stated as arbitrary axioms, without
any attempt to present the evidence for them; and one proceeds
directly to working out their consequences.
However, we take the attitude here that thermostatics, for
all its venerable age, is very far from being a closed subject;
we still have a great deal to learn about such matters as the
most general definitions of equilibrium and reversibility, the
exact range of validity of various statements of the second and
third laws, the necessary and sufficient conditions for
applicability of thermodynamics to special cases such as
spin systems, and how thermodynamics can be applied to such
systems as putty or polyethylene, which deform under force,
but retain a \ldquo{}memory\rdquo{} of their past deformations.
Is it possible to apply thermodynamics to a system such as a vibrating quartz crystal? We can by
no means rule out the possibility that still more laws of
thermodynamics exist, as yet undiscovered, which would be
useful in such applications.
It is only by careful examination of the logic by which
present thermodynamics was created, asking exactly how much of
it is mathematical theorems, how much is deducible from the laws
of mechanics and electrodynamics, and how much rests only on
empirical evidence, and how compelling the present evidence is for the
accuracy and range of validity of its laws---in other words,
exactly where the boundaries of present knowledge lie---that we
can hope to uncover new things. Clearly, much research is still
needed in this field, and we shall be able to accomplish only a
small part of this program in the present review.

It will develop that there is an astonishingly close analogy
with the logic underlying statistical theory in general, where
again a qualitative feeling that we all have (for the degrees of
plausibility of various unproved and undisproved assertions) must
be converted into a precisely defined quantitative concept
(probability). Our later development of probability theory in
Chapters 6 and 7 will be, to a considerable degree, a paraphrase
of our present review of the logic underlying classical
thermodynamics.
** The Primitive Thermometer

The earliest stages of our
story are necessarily speculative, since they took place long
before the beginnings of recorded history. But we can hardly
doubt that primitive man learned quickly that objects exposed
to the sun's rays or placed near a fire felt different from
those in the shade away from fires; and the same difference was
noted between animal bodies and inanimate objects.

As soon as it was noted that changes in this feeling of
warmth were correlated with other observable changes in the
behavior of objects, such as the boiling and freezing of water,
cooking of meat, melting of fat and wax, etc., the notion of
warmth took its first step away from the purely subjective
toward an objective, physical notion capable of being studied
scientifically.
One of the most striking manifestations of warmth (but far
from the earliest discovered) is the almost universal expansion
of gases, liquids, and solids when heated. This property has
proved to be a convenient one with which to reduce the notion
of warmth to something entirely objective. The invention of the
/thermometer/, in which expansion of a mercury column, or a gas,
or the bending of a bimetallic strip, etc. is read off on a
suitable scale, thereby giving us a /number/ with which to work,
was a necessary prelude to even the crudest study of the physical
nature of heat. To the best of our knowledge, although the
necessary technology to do this had been available for at least
3,000 years, the first person to carry it out in practice was
Galileo, in 1592.

Later on we will give more precise definitions of the term
\ldquo{}thermometer.\rdquo{} But at the present stage we
are not in a position to do so (as Galileo was not), because
the very concepts needed have not yet been developed;
more precise definitions can be
given only after our study has revealed the need for them. Indeed,
our final definition can be given only after the full
mathematical formalism of statistical mechanics is at hand.

Once a thermometer has been constructed, and the scale
marked off in a quite arbitrary way (although we will suppose
that the scale is at least monotonic: i.e., greater warmth always
corresponds to a greater number), we are ready to begin
scientific experiments in thermodynamics. The number read off from
any such instrument is called the /empirical temperature/, and we
denote it by \(t\). Since the exact calibration of the thermometer
is not specified, any monotonic increasing function
\(t' = f(t)\) provides an equally good temperature scale for the
present.
** Thermodynamic Systems

The \ldquo{}thermodynamic systems\rdquo{} which
are the objects of our study may be, physically, almost any
collections of objects. The traditional simplest system with
which to begin a study of thermodynamics is a volume of gas.
We shall, however, be concerned from the start also with such
things as a stretched wire or membrane, an electric cell, a
polarized dielectric, a paramagnetic body in a magnetic field, etc.

The /thermodynamic state/ of such a system is determined by
specifying (i.e., measuring) certain macroscopic physical
properties. Now, any real physical system has many millions of such
properties; in order to have a usable theory we cannot require
that /all/ of them be specified. We see, therefore, that there
must be a clear distinction between the notions of
\ldquo{}thermodynamic system\rdquo{} and \ldquo{}physical
system.\rdquo{}
A given /physical/ system may correspond to many different
/thermodynamic systems/, depending
on which variables we choose to measure or control; and which
we decide to leave unmeasured and/or uncontrolled.

For example, our physical system might consist of a crystal
of sodium chloride. For one set of experiments we work with
temperature, volume, and pressure; and ignore its electrical
properties. For another set of experiments we work with
temperature, electric field, and electric polarization; and
ignore the varying stress and strain. The /physical/ system,
therefore, corresponds to two entirely different /thermodynamic/
systems. Exactly how much freedom, then, do we have in choosing
the variables which shall define the thermodynamic state of our
system? How many must we choose? What [criteria] determine when
we have made an adequate choice? These questions cannot be
answered until we say a little more about what we are trying to
accomplish by a thermodynamic theory. A mere collection of
recorded data about our system, as in the [[http://en.wikipedia.org/wiki/CRC_Handbook_of_Chemistry_and_Physics][/Handbook of Physics and
Chemistry/]], is a very useful thing, but it hardly constitutes
a theory. In order to construct anything deserving of such a
name, the primary requirement is that we can recognize some kind
of reproducible connection between the different properties
considered, so that information about some of them will enable us
to predict others. And of course, in order that our theory can
be called thermodynamics (and not some other area of physics),
it is necessary that the temperature be one of the quantities
involved in a nontrivial way.

The gist of these remarks is that the notion of
\ldquo{}thermodynamic system\rdquo{} is in part
an anthropomorphic one; it is for us to
say which set of variables shall be used. If two different
choices both lead to useful reproducible connections, it is quite
meaningless to say that one choice is any more \ldquo{}correct\rdquo{}
than the other. Recognition of this fact will prove crucial later in
avoiding certain ancient paradoxes.

At this stage we can determine only empirically which other
physical properties need to be introduced before reproducible
connections appear. Once any such connection is established, we
can analyze it with the hope of being able to (1) reduce it to a
/logical/ connection rather than an empirical one; and (2) extend
it to an hypothesis applying beyond the original data, which
enables us to predict further connections capable of being
tested by experiment. Examples of this will be given presently.

There will remain, however, a few reproducible relations
which, to the best of present knowledge, are not reducible to
logical relations within the context of classical thermodynamics
(and whose demonstration in the wider context of mechanics,
electrodynamics, and quantum theory remains one of probability
rather than logical proof); from the standpoint of thermodynamics
these remain simply statements of empirical fact which must be
accepted as such without any deeper basis, but without which the
development of thermodynamics cannot proceed. Because of this
special status, these relations have become known as the
\ldquo{}laws\rdquo{}
of thermodynamics. The most fundamental one is a qualitative
rather than quantitative relation, the \ldquo{}zero'th law.\rdquo{}
** Equilibrium; the Zeroth Law

It is a common experience
that when objects are placed in contact with each other but
isolated from their surroundings, they may undergo observable
changes for a time as a result; one body may become warmer,
another cooler, the pressure of a gas or volume of a liquid may
change; stress or magnetization in a solid may change, etc. But
after a sufficient time, the observable macroscopic properties
settle down to a steady condition, after which no further changes
are seen unless there is a new intervention from the outside.
When this steady condition is reached, the experimentalist says
that the objects have reached a state of /equilibrium/ with each
other. Once again, more precise definitions of this term will
be needed eventually, but they require concepts not yet developed.
In any event, the criterion just stated is almost the only one
used in actual laboratory practice to decide when equilibrium
has been reached.

A particular case of equilibrium is encountered when we
place a thermometer in contact with another body. The reading
\(t\) of the thermometer may vary at first, but eventually it reaches
a steady value. Now the number \(t\) read by a thermometer is always,
by definition, the empirical temperature /of the thermometer/ (more
precisely, of the sensitive element of the thermometer). When
this number is constant in time, we say that the thermometer is
in /thermal equilibrium/ with its surroundings; and we then extend
the notion of temperature, calling the steady value \(t\) also the
/temperature of the surroundings/.

We have repeated these elementary facts, well known to every
child, in order to emphasize this point: Thermodynamics can be
a theory /only/ of states of equilibrium, because the very
procedure by which the temperature of a system is defined by
operational means already presupposes the attainment of
equilibrium. Strictly speaking, therefore, classical
thermodynamics does not even contain the concept of a
\ldquo{}time-varying temperature.\rdquo{}
Of course, to recognize this limitation on conventional
thermodynamics (best emphasized by calling it instead,
thermostatics) in no way rules out the possibility of
generalizing the notion of temperature to nonequilibrium states.
Indeed, it is clear that one could define any number of
time-dependent quantities all of which reduce, in the special
case of equilibrium, to the temperature as defined above.
Historically, attempts to do this even antedated the discovery
of the laws of thermodynamics, as is demonstrated by
\ldquo{}Newton's law of cooling.\rdquo{} Therefore, the
question is not whether generalization is /possible/, but only
whether it is in any way /useful/; i.e., does the temperature so
generalized have any connection with other physical properties
of our system, so that it could help us to predict other things?
However, to raise such questions takes us far beyond the
domain of thermostatics; and the general laws of nonequilibrium
behavior are so much more complicated that it would be virtually
hopeless to try to unravel them by empirical means alone. For
example, even if two different kinds of thermometer are calibrated
so that they agree with each other in equilibrium situations,
they will not agree in general about the momentary value of a
\ldquo{}time-varying temperature.\rdquo{} To make any real
progress in this area, we have to supplement empirical observation by the guidance
of a rather highly-developed theory. The notion of a
time-dependent temperature is far from simple conceptually, and we
will find that nothing very helpful can be said about this until
the full mathematical apparatus of nonequilibrium statistical
mechanics has been developed.
Suppose now that two bodies have the same temperature; i.e.,
a given thermometer reads the same steady value when in contact
with either. In order that the statement, \ldquo{}two bodies have the
same temperature\rdquo{} shall describe a physical property of the bodies,
and not merely an accidental circumstance due to our having used
a particular kind of thermometer, it is necessary that /all/
thermometers agree in assigning equal temperatures to them if
/any/ thermometer does. Only experiment is competent to determine
whether this universality property is true. Unfortunately, the
writer must confess that he is unable to cite any definite
experiment in which this point was subjected to a careful test.
That equality of temperatures has this absolute meaning, has
evidently been taken for granted so much that (like absolute
simultaneity in pre-relativity physics) most of us are not even
consciously aware that we make such an assumption in
thermodynamics. However, for the present we can only take it as a familiar
empirical fact that this condition does hold, not because we can
cite positive evidence for it, but because of the absence of
negative evidence against it; i.e., we think that, if an
exception had ever been found, this would have created a sensation in
physics, and we should have heard of it.
We now ask: when two bodies are at the same temperature,
are they then in thermal equilibrium with each other? Again,
only experiment is competent to answer this; the general
conclusion, again supported more by absence of negative evidence
than by specific positive evidence, is that the relation of
equilibrium has this property:
#+begin_quote
/Two bodies in thermal equilibrium
with a third body, are in thermal equilibrium with each other./
#+end_quote

This empirical fact is usually called the \ldquo{}zero'th law of
thermodynamics.\rdquo{} Since nothing prevents us from regarding a
thermometer as the \ldquo{}third body\rdquo{} in the above statement,
it appears that we may also state the zero'th law as:
#+begin_quote
/Two bodies are in thermal equilibrium with each other when they are
at the same temperature./
#+end_quote
Although from the preceding discussion it might appear that
these two statements of the zero'th law are entirely equivalent
(and we certainly have no empirical evidence against either), it
is interesting to note that there are theoretical reasons, arising
from General Relativity, indicating that while the first
statement may be universally valid, the second is not. When we
consider equilibrium in a gravitational field, the verification
that two bodies have equal temperatures may require transport
of the thermometer through a gravitational potential difference;
and this introduces a new element into the discussion. We will
consider this in more detail in a later Chapter, and show that
according to General Relativity, equilibrium in a large system
requires, not that the temperature be uniform at all points, but
rather that a particular function of temperature and gravitational
potential be constant (the function is \(T\cdot \exp{(\Phi/c^2)}\), where
\(T\) is the Kelvin temperature to be defined later, and \(\Phi\) is the
gravitational potential).

Of course, this effect is so small that ordinary terrestrial
experiments would need to have a precision many orders of
magnitude beyond that presently possible, before one could hope even
to detect it; and needless to say, it has played no role in the
development of thermodynamics. For present purposes, therefore,
we need not distinguish between the two above statements of the
zero'th law, and we take it as a basic empirical fact that a
uniform temperature at all points of a system is an essential
condition for equilibrium. It is an important part of our
investigation to determine whether there are other essential
conditions as well. In fact, as we will find, there are many
different kinds of equilibrium; and failure to distinguish between
them can be a prolific source of paradoxes.
** Equation of State

Another important reproducible connection is found when
we consider a thermodynamic system defined by
three parameters; in addition to the temperature we choose a
\ldquo{}displacement\rdquo{} and a conjugate \ldquo{}force.\rdquo{}
Subject to some qualifications given below, we find experimentally
that these parameters are not independent, but are subject to a constraint.
For example, we cannot vary the equilibrium pressure, volume,
and temperature of a given mass of gas independently; it is found
that a given pressure and volume can be realized only at one
particular temperature, that the gas will assume a given temperature
and volume only at one particular pressure, etc. Similarly,
a stretched wire can be made to have arbitrarily assigned tension
and elongation only if its temperature is suitably chosen, a
dielectric will assume a state of given temperature and
polarization at only one value of the electric field, etc.

These simplest nontrivial thermodynamic systems (three
parameters with one constraint) are said to possess two
/degrees of freedom/; for the range of possible equilibrium states is defined
by specifying any two of the variables arbitrarily, whereupon the
third, and all others we may introduce, are determined.
Mathematically, this is expressed by the existence of a functional
relationship of the form[fn:: The set of solutions to an equation
like /f(X,x,t)=/ const. is called a /level set/. Here, Jaynes is
saying that the quantities /X/, /x/, and /t/ follow a \ldquo{}functional
rule\rdquo{}, so the set of physically allowed combinations of /X/,
/x/, and /t/ in equilibrium states can be
expressed as the level set of a function.

But not every function expresses a constraint relation; for some
functions, you can specify two of the variables, and the third will
still be undetermined. (For example, if /f=X^2+x^2+t^2-3/,
the level set /f(X,x,t)=0/ is a sphere, and specifying /x=1/, /t=1/
leaves you with two possibilities: /X = \pm 1/.)

A function like /f/ has to possess one more property in order for its
level set to express a constraint relationship: it must be monotonic in
each of its variables /X/, /x/, and /t/.
In other words, the level set has to pass a sort of
\ldquo{}vertical line test\rdquo{} for each of its variables.]
\begin{equation}
f(X,x,t) = 0
\end{equation}

where $X$ is a generalized force (pressure, tension, electric or
magnetic field, etc.), $x$ is the corresponding generalized
displacement (volume, elongation, electric or magnetic polarization,
etc.), and $t$ is the empirical temperature. Equation (1-1) is
called /the equation of state/.
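To make the notion of two degrees of freedom concrete, here is a small
numerical sketch (an editorial illustration, not part of Jaynes's text;
the ideal-gas law and the constant =R= are assumptions of the example):
once any two of \(P\), \(V\), \(t\) are fixed, the equation of state
determines the third, which we can recover by bisection using only the
monotonicity of \(f\) in the remaining variable.

```python
# Illustrative sketch: the ideal-gas equation of state
# f(P, V, t) = P*V - R*t = 0 (an assumed example; R in J/(mol K), 1 mol).
# Fixing any two of the three variables determines the third.
R = 8.314

def f(P, V, t):
    return P * V - R * t

def solve_t(P, V, lo=1e-6, hi=1e6, tol=1e-9):
    """Solve f(P, V, t) = 0 for t by bisection; f is strictly
    decreasing in t, which is what makes the root unique."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(P, V, mid) > 0:
            lo = mid        # root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# about 1 mol of gas at atmospheric pressure in 22.4 litres
t = solve_t(101325.0, 0.0224)   # close to 273 (degrees Kelvin)
```

The monotonicity requirement here is exactly the \ldquo{}vertical line
test\rdquo{} condition discussed in the footnote above: without it, the
bisection would not be guaranteed a unique root.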
At the risk of belaboring it, we emphasize once again that
all of this applies only for a system in equilibrium; for
otherwise not only the temperature, but also some or all of the other
variables may not be definable. For example, no unique pressure
can be assigned to a gas which has just suffered a sudden change
in volume, until the generated sound waves have died out.
Independently of its functional form, the mere fact of the
/existence/ of an equation of state has certain experimental
consequences. For example, suppose that in experiments on oxygen
gas, in which we control the temperature and pressure
independently, we have found that the isothermal compressibility $K$
varies with temperature, and the thermal expansion coefficient
\alpha varies with pressure $P$, so that within the accuracy of the data,

\begin{equation}
\frac{\partial K}{\partial t} = - \frac{\partial \alpha}{\partial P}
\end{equation}

Is this a particular property of oxygen; or is there reason to
believe that it holds also for other substances? Does it depend
on our particular choice of a temperature scale?
In this case, the answer is found at once; for the definitions of $K$,
\alpha are

\begin{equation}
K = -\frac{1}{V}\frac{\partial V}{\partial P},\qquad
\alpha=\frac{1}{V}\frac{\partial V}{\partial t}
\end{equation}

which is simply a mathematical expression of the fact that the
volume $V$ is a definite function of $P$ and $t$; i.e., it depends only
on their present values, and not how those values were attained.
In particular, $V$ does not depend on the direction in the \((P, t)\)
plane through which the present values were approached; or, as we
usually say it, \(dV\) is an /exact differential/.
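The computation behind this claim can be spelled out (this derivation is
an editorial addition): differentiating the definitions of \(K\) and
\(\alpha\) with respect to \(t\) and \(P\) respectively gives

\begin{equation}
\frac{\partial K}{\partial t} = \frac{1}{V^2}\frac{\partial V}{\partial t}\frac{\partial V}{\partial P} - \frac{1}{V}\frac{\partial^2 V}{\partial t\,\partial P}, \qquad
\frac{\partial \alpha}{\partial P} = -\frac{1}{V^2}\frac{\partial V}{\partial P}\frac{\partial V}{\partial t} + \frac{1}{V}\frac{\partial^2 V}{\partial P\,\partial t}
\end{equation}

and the two right-hand sides are equal and opposite whenever the mixed
second derivatives of \(V(P,t)\) are equal, i.e., precisely when \(dV\)
is an exact differential.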
Therefore, although at first glance the relation (1-2) appears
nontrivial and far from obvious, a trivial mathematical analysis
convinces us that it must hold regardless of our particular
temperature scale, and that it is true not only of oxygen; it must
hold for any substance, or mixture of substances, which possesses a
definite, reproducible equation of state \(f(P,V,t)=0\).
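The same point can be verified numerically. In this sketch (an
editorial illustration; the particular function =V(P, t)= below is an
invented toy equation of state, not a real substance) we check by
finite differences that once a definite \(V(P,t)\) exists, relation
(1-2) holds automatically:

```python
# Numerical check (illustrative; the form of V(P,t) is an arbitrary
# assumption): whenever a definite V = V(P,t) exists,
# dK/dt = -d(alpha)/dP follows, where
#   K     = -(1/V) dV/dP   (isothermal compressibility)
#   alpha =  (1/V) dV/dt   (thermal expansion coefficient)

def V(P, t):
    return t / P + 0.05 - 2.0 / t   # a made-up equation of state

def K(P, t, h=1e-6):
    return -(V(P + h, t) - V(P - h, t)) / (2 * h) / V(P, t)

def alpha(P, t, h=1e-6):
    return (V(P, t + h) - V(P, t - h)) / (2 * h) / V(P, t)

def dK_dt(P, t, h=1e-4):
    return (K(P, t + h) - K(P, t - h)) / (2 * h)

def dalpha_dP(P, t, h=1e-4):
    return (alpha(P + h, t) - alpha(P - h, t)) / (2 * h)

P0, t0 = 2.0, 5.0
# the two sides of (1-2) agree to within finite-difference error
residual = dK_dt(P0, t0) + dalpha_dP(P0, t0)
```

Any other choice of a smooth =V(P, t)= gives the same vanishing
residual, which is the point: (1-2) tests only the /existence/ of an
equation of state, not its form.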
But this understanding also enables us to predict situations in which
(1-2) will /not/ hold. Equation (1-2), as we have just learned, expresses
the fact that an equation of state exists involving only the three
variables \((P,V,t)\). Now suppose we try to apply it to a liquid such
as nitrobenzene. The nitrobenzene molecule has a large electric dipole
moment; and so application of an electric field (as in the
[[http://en.wikipedia.org/wiki/Kerr_effect][electro-optical Kerr cell]]) causes an alignment of molecules which, as
accurate measurements will verify, changes the pressure at a given
temperature and volume. Therefore, there can no longer exist any
unique equation of state involving \((P, V, t)\) only; with
sufficiently accurate measurements, nitrobenzene must be regarded as a
thermodynamic system with at least three degrees of freedom, and the
general equation of state must have at least as complicated a form as
\(f(P,V,t,E) = 0\).
But if we introduce a varying electric field $E$ into the discussion,
the resulting varying electric polarization $M$ also becomes a new
thermodynamic variable capable of being measured. Experimentally, it
is easiest to control temperature, pressure, and electric field
independently, and of course we find that both the volume and
polarization are then determined; i.e., there must exist functional
relations of the form \(V = V(P,t,E)\), \(M = M(P,t,E)\), or in more
symmetrical form

\begin{equation}
f(V,P,t,E) = 0 \qquad g(M,P,t,E)=0.
\end{equation}

In other words, if we regard nitrobenzene as a thermodynamic system of
three degrees of freedom (i.e., having specified three parameters
arbitrarily, all others are then determined), it must possess two
independent equations of state.

Similarly, a thermodynamic system with four degrees of freedom,
defined by the temperature and three pairs of conjugate forces and
displacements, will have three independent equations of state, etc.
Now, returning to our original question, if nitrobenzene possesses
this extra electrical degree of freedom, under what circumstances do
we expect to find a reproducible equation of state involving
\((P,V,t)\) only? Evidently, if $E$ is held constant, then the first
of equations (1-5) becomes such an equation of state, involving $E$ as
a fixed parameter; we would find many different equations of state of
the form \(f(P,V,t) = 0\) with a different function $f$ for each
different value of the electric field. Likewise, if \(M\) is held
constant, we can eliminate \(E\) between equations (1-5) and find a
relation \(h(P,V,t,M)=0\), which is an equation of state for
\((P,V,t)\) containing \(M\) as a fixed parameter.
More generally, if an electrical constraint is imposed on the system
(for example, by connecting an external charged capacitor to the
electrodes) so that \(M\) is determined by \(E\); i.e., there is a
functional relation of the form

\begin{equation}
g(M,E) = \text{const.}
\end{equation}

then (1-5) and (1-6) constitute three simultaneous equations, from
which both \(E\) and \(M\) may be eliminated mathematically, leading
to a relation of the form \(h(P,V,t;q)=0\), which is an equation of
state for \((P,V,t)\) involving the fixed parameter \(q\).
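To see how the elimination works in the simplest possible case,
consider a toy linear model (an editorial illustration; the linear
forms and the constants \(V_0, \kappa, \beta, \gamma, \chi\) are
invented for the example). Let the two equations of state (1-5) be
\(V = V_0 - \kappa P + \beta t + \gamma E\) and \(M = \chi E\), and
let the constraint (1-6) fix the charge: \(M = q\). Then
\(E = q/\chi\), and substitution gives

\begin{equation}
h(P,V,t;q) \equiv V - V_0 + \kappa P - \beta t - \frac{\gamma q}{\chi} = 0
\end{equation}

an equation of state involving only \((P,V,t)\), with \(q\) entering
as a fixed parameter, just as described above.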
We see, then, that as long as a fixed constraint of the form (1-6) is
imposed on the electrical degree of freedom, we can still observe a
reproducible equation of state for nitrobenzene, considered as a
thermodynamic system of only two degrees of freedom. If, however, this
electrical constraint is removed, so that as we vary $P$ and $t$, the
values of $E$ and $M$ vary in an uncontrolled way over a
/two-dimensional/ region of the \((E, M)\) plane, then we will find no
definite equation of state involving only \((P,V,t)\).

This may be stated more colloquially as follows: even though a system
has three degrees of freedom, we can still consider only the variables
belonging to two of them, and we will find a definite equation of
state, /provided/ that in the course of the experiments, the unused
degree of freedom is not \ldquo{}tampered with\rdquo{} in an
uncontrolled way.
554 We have already emphasized that any physical system corresponds to
555 many different thermodynamic systems, depending on which variables we
556 choose to control and measure. In fact, it is easy to see that any
557 physical system has, for all practical purposes, an /arbitrarily
558 large/ number of degrees of freedom. In the case of nitrobenzene, for
559 example, we may impose any variety of nonuniform electric fields on
560 our sample. Suppose we place $(n+1)$ different electrodes, labelled
561 \(\{e_0,e_1, e_2 \ldots e_n\}\) in contact with the liquid in various
562 positions. Regarding \(e_0\) as the \ldquo{}ground\rdquo{}, maintained
563 at zero potential, we can then impose $n$ different potentials
564 \(\{v_1, \ldots, v_n\}\) on the other electrodes independently, and we
565 can also measure the $n$ different conjugate displacements, as the
566 charges \(\{q_1,\ldots, q_n\}\) accumulated on electrodes
567 \(\{e_1,\ldots e_n\}\). Together with the pressure (understood as the
568 pressure measured at one given position), volume, and temperature, our
569 sample of nitrobenzene is now a thermodynamic system of $(n+1)$
570 degrees of freedom. This number may be as large as we please, limited
571 only by our patience in constructing the apparatus needed to control
572 or measure all these quantities.
574 We leave it as an exercise for the reader (Problem 1) to find the most
575 general condition on the variables \(\{v_1, q_1, v_2, q_2, \ldots
576 v_n,q_n\}\) which will ensure that a definite equation of state
577 $f(P,V,t)=0$ is observed in spite of all these new degrees of
578 freedom. The simplest special case of this relation is, evidently, to
579 ground all electrodes, thereby imposing the conditions $v_1 = v_2 =
580 \ldots = v_n = 0$. Equally well (if we regard nitrobenzene as having
581 negligible electrical conductivity) we may open-circuit all
582 electrodes, thereby imposing the conditions \(q_i = \text{const.}\) In
583 the latter case, in addition to an equation of state of the form
584 \(f(P,V,t)=0\), which contains these constants as fixed parameters,
585 there are \(n\) additional equations of state of the form $v_i =
586 v_i(P,t)$. But if we choose to ignore these voltages, there will be no
587 contradiction in considering our nitrobenzene to be a thermodynamic
588 system of two degrees of freedom, involving only the variables
589 \(P,V,t\).
591 Similarly, if our system of interest is a crystal, we may impose on it
592 a wide variety of nonuniform stress fields; each component of the
593 stress tensor $T_{ij}$ may vary with position. We might expand each of
594 these functions in a complete orthonormal set of functions
595 \(\phi_k(x,y,z)\):
597 \begin{equation}
598 T_{ij}(x,y,z) = \sum_k a_{ijk} \phi_k(x,y,z)
599 \end{equation}
601 and with a sufficiently complicated system of levers which in various
602 ways squeeze and twist the crystal, we might vary each of the first
603 1,000 expansion coefficients $a_{ijk}$ independently, and measure the
604 conjugate displacements $q_{ijk}$. Our crystal is then a thermodynamic
605 system of over 1,000 degrees of freedom.
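The expansion of a field in an orthonormal set, as above, is easy to illustrate numerically. The following is an editor's sketch in Python (not from the text), using Legendre polynomials on \([-1,1]\) as a concrete one-dimensional stand-in for the unspecified set \(\phi_k\); the "stress profile" is an arbitrary smooth function chosen for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def T(x):
    # An arbitrary smooth "stress profile" standing in for T_ij(x).
    return np.exp(-x**2) * np.sin(3 * x)

# Gauss-Legendre nodes and weights give accurate inner products on [-1, 1].
nodes, weights = legendre.leggauss(64)

def coefficient(k):
    # a_k = <T, P_k> / <P_k, P_k>, where <P_k, P_k> = 2/(2k+1)
    # for the (orthogonal, not orthonormal) Legendre polynomials.
    Pk = legendre.Legendre.basis(k)(nodes)
    return np.sum(weights * T(nodes) * Pk) * (2 * k + 1) / 2.0

# Because T is smooth, the first 20 coefficients already reproduce it to
# machine-level accuracy; a rough field would need many more terms.
coeffs = [coefficient(k) for k in range(20)]
x = np.linspace(-1.0, 1.0, 501)
max_err = float(np.max(np.abs(T(x) - legendre.Legendre(coeffs)(x))))
```

Each coefficient plays the role of one independently controllable \(a_{ijk}\) in the text's thought experiment.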
607 The notion of \ldquo{}numbers of degrees of freedom\rdquo{} is
608 therefore not a /physical property/ of any system; it is entirely
609 anthropomorphic, since any physical system may be regarded as a
610 thermodynamic system with any number of degrees of freedom we please.
612 If new thermodynamic variables are always introduced in pairs,
613 consisting of a \ldquo{}force\rdquo{} and conjugate
614 \ldquo{}displacement\rdquo{}, then a thermodynamic system of $n$
615 degrees of freedom must possess $(n-1)$ independent equations of
616 state, so that specifying $n$ quantities suffices to determine all
617 others.
619 This raises an interesting question: whether the scheme of classifying
620 thermodynamic variables in conjugate pairs is the most general
621 one. Why, for example, is it not natural to introduce three related
622 variables at a time? To the best of the writer's knowledge, this is an
623 open question; there seems to be no fundamental reason why variables
624 /must/ always be introduced in conjugate pairs, but there seems to be
625 no known case in which a different scheme suggests itself as more
626 appropriate.
628 ** Heat
629 We are now in a position to consider the results and interpretation of
630 a number of elementary experiments involving
631 thermal interaction, which can be carried out as soon as a primitive
632 thermometer is at hand. In fact these experiments, which we summarize
633 so quickly, required a very long time for their first performance, and
634 the essential conclusions of this Section were first arrived at only
635 about 1760---more than 160 years after Galileo's invention of the
636 thermometer---by Joseph Black, who was Professor of Chemistry at
637 Glasgow University. Black's analysis of calorimetric experiments
638 initiated by G. D. Fahrenheit before 1736 led to the first recognition
639 of the distinction between temperature and heat, and prepared the way
640 for the work of his better-known pupil, James Watt.
642 We first observe that if two bodies at different temperatures are
643 separated by walls of various materials, they sometimes maintain their
644 temperature difference for a long time, and sometimes reach thermal
645 equilibrium very quickly. The differences in behavior observed must be
646 ascribed to the different properties of the separating walls, since
647 nothing else is changed. Materials such as wood, asbestos, porous
648 ceramics (and most of all, modern porous plastics like styrofoam), are
649 able to sustain a temperature difference for a long time; a wall of an
650 imaginary material with this property idealized to the point where a
651 temperature difference is maintained indefinitely is called an
652 /adiabatic wall/. A very close approximation to a perfect adiabatic
653 wall is realized by the Dewar flask (thermos bottle), of which the
654 walls consist of two layers of glass separated by a vacuum, with the
655 surfaces silvered like a mirror. In such a container, as we all know,
656 liquids may be maintained hot or cold for days.
658 On the other hand, a thin wall of copper or silver is hardly able to
659 sustain any temperature difference at all; two bodies separated by
660 such a partition come to thermal equilibrium very quickly. Such a wall
661 is called /diathermic/. It is found in general that the best
662 diathermic materials are the metals and good electrical conductors,
663 while electrical insulators make fairly good adiabatic walls. There
664 are good theoretical reasons for this rule; a particular case of it is
665 given by the [[http://en.wikipedia.org/wiki/Wiedemann_franz_law][Wiedemann-Franz law]] of solid-state theory.
667 Since a body surrounded by an adiabatic wall is able to maintain its
668 temperature independently of the temperature of its surroundings, an
669 adiabatic wall provides a means of thermally /isolating/ a system from
670 the rest of the universe; it is to be expected, therefore, that the
671 laws of thermal interaction between two systems will assume the
672 simplest form if they are enclosed in a common adiabatic container,
673 and that the best way of carrying out experiments on thermal
674 properties of substances is to so enclose them. Such an apparatus, in
675 which systems are made to interact inside an adiabatic container
676 supplied with a thermometer, is called a /calorimeter/.
678 Let us imagine that we have a calorimeter in which there is initially
679 a volume $V_W$ of water at a temperature $t_1$, and suspended above it
680 a volume $V_I$ of some other substance (say, iron) at temperature
681 $t_2$. When we drop the iron into the water, they interact thermally
682 (and the exact nature of this interaction is one of the things we hope
683 to learn now), the temperature of both changing until they are in
684 thermal equilibrium at a final temperature $t_0$.
686 Now we repeat the experiment with different initial temperatures
687 $t_1^\prime$ and $t_2^\prime$, so that a new equilibrium is reached at
688 temperature $t_0^\prime$. It is found that, if the temperature
689 differences are sufficiently small (and in practice this is not a
690 serious limitation if we use a mercury thermometer calibrated with
691 uniformly spaced degree marks on a capillary of uniform bore), then
692 whatever the values of $t_1^\prime$, $t_2^\prime$, $t_1$, $t_2$, the
693 final temperatures $t_0^\prime$, $t_0$ will adjust themselves so that
694 the following relation holds:
696 \begin{equation}
697 \frac{t_2 - t_0}{t_0 - t_1} = \frac{t_2^\prime -
698 t_0^\prime}{t_0^\prime - t_1^\prime}
699 \end{equation}
701 in other words, the /ratio/ of the temperature changes of the iron and
702 water is independent of the initial temperatures used.
704 We now vary the amounts of iron and water used in the calorimeter. It
705 is found that the ratio (1-8), although always independent of the
706 starting temperatures, does depend on the relative amounts of iron and
707 water. It is, in fact, proportional to the mass $M_W$ of water and
708 inversely proportional to the mass $M_I$ of iron, so that
710 \begin{equation}
711 \frac{t_2-t_0}{t_0-t_1} = \frac{M_W}{K_I M_I}
712 \end{equation}
714 where $K_I$ is a constant.
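Equation (1-9) fixes the equilibrium temperature once the masses and the constant \(K_I\) are given. An editor's sketch in Python (the specific heat of iron, roughly 0.107 cal/(g·°C), is an approximate modern value, not from the text; the masses and temperatures are hypothetical):

```python
def final_temperature(M_W, t1, K, M, t2):
    # Solve (1-9), (t2 - t0)/(t0 - t1) = M_W/(K*M), for t0.
    # The specific heat of water is unity, by the convention of the text.
    return (M_W * t1 + K * M * t2) / (M_W + K * M)

# Hypothetical run: 500 g of water at 20 C, 100 g of iron at 80 C,
# with K_I ~ 0.107 cal/(g C) (approximate modern value).
t0 = final_temperature(500.0, 20.0, 0.107, 100.0, 80.0)
ratio = (80.0 - t0) / (t0 - 20.0)
# The ratio equals M_W/(K_I * M_I) = 500/10.7, whatever t1 and t2 were.
```

Rerunning with different initial temperatures changes \(t_0\) but leaves the ratio unchanged, which is exactly the experimental content of (1-8) and (1-9).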
716 We next repeat the above experiments using a different material in
717 place of the iron (say, copper). We find again a relation
719 \begin{equation}
720 \frac{t_2-t_0}{t_0-t_1} = \frac{M_W}{K_C \cdot M_C}
721 \end{equation}
723 where $M_C$ is the mass of copper; but the constant $K_C$ is different
724 from the previous $K_I$. In fact, we see that the constant $K_I$ is a
725 new physical property of the substance iron, while $K_C$ is a physical
726 property of copper. The number $K$ is called the /specific heat/ of a
727 substance, and it is seen that according to this definition, the
728 specific heat of water is unity.
730 We now have enough experimental facts to begin speculating about their
731 interpretation, as was first done in the 18th century. First, note
732 that equation (1-9) can be put into a neater form that is symmetrical
733 between the two substances. We write $\Delta t_I = t_0 - t_2$, $\Delta
734 t_W = t_0 - t_1$ for the temperature changes of iron and water
735 respectively, and define $K_W \equiv 1$ for water. Equation (1-9) then
736 becomes
738 \begin{equation}
739 K_W M_W \Delta t_W + K_I M_I \Delta t_I = 0
740 \end{equation}
742 The form of this equation suggests a new experiment; we go back into
743 the laboratory, and find $n$ substances for which the specific heats
744 \(\{K_1,\ldots K_n\}\) have been measured previously. Taking masses
745 \(\{M_1, \ldots, M_n\}\) of these substances, we heat them to $n$
746 different temperatures \(\{t_1,\ldots, t_n\}\) and throw them all into
747 the calorimeter at once. After they have all come to thermal
748 equilibrium at temperature $t_0$, we find the differences $\Delta t_j
749 = t_0 - t_j$. Just as we suspected, it turns out that regardless of
750 the $K$'s, $M$'s, and $t$'s chosen, the relation
751 \begin{equation}
752 \sum_{j=0}^n K_j M_j \Delta t_j = 0
753 \end{equation}
754 is always satisfied. This sort of process is an old story in
755 scientific investigations; although the great theoretician Boltzmann
756 is said to have remarked: \ldquo{}Elegance is for tailors\rdquo{}, it
757 remains true that the attempt to reduce equations to the most
758 symmetrical form has often suggested important generalizations of
759 physical laws, and is a great aid to memory. Witness Maxwell's
760 \ldquo{}displacement current\rdquo{}, which was needed to fill in a
761 gap and restore the symmetry of the electromagnetic equations; as soon
762 as it was put in, the equations predicted the existence of
763 electromagnetic waves. In the present case, the search for a rather
764 rudimentary form of \ldquo{}elegance\rdquo{} has also been fruitful,
765 for we recognize that (1-12) has the standard form of a /conservation
766 law/; it defines a new quantity which is conserved in thermal
767 interactions of the type just studied.
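The \(n\)-substance experiment behind (1-12) can be simulated in a few lines. An editor's sketch in Python, assuming temperature-independent specific heats; the specific-heat values are approximate modern ones and the masses and temperatures are hypothetical:

```python
# Approximate specific heats (cal per g per degree) and hypothetical
# masses and initial temperatures: water, iron, copper, aluminum.
K = [1.0, 0.107, 0.093, 0.215]
M = [400.0, 150.0, 80.0, 60.0]
t = [15.0, 95.0, 60.0, 5.0]

# Equilibrium temperature: the K*M-weighted mean of the initial
# temperatures (what energy balance gives when the K's are constant).
t0 = sum(k * m * ti for k, m, ti in zip(K, M, t)) / \
     sum(k * m for k, m in zip(K, M))

# The conservation law (1-12) then holds to rounding error:
balance = sum(k * m * (t0 - ti) for k, m, ti in zip(K, M, t))
```

The quantity `balance` is \(\sum_j K_j M_j \Delta t_j\), and it vanishes identically however the \(K\)'s, \(M\)'s, and \(t\)'s are chosen.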
769 The similarity of (1-12) to conservation laws in general may be seen
770 as follows. Let $A$ be some quantity that is conserved; the \(i\)th
771 system has an amount of it $A_i$. Now when the systems interact such
772 that some $A$ is transferred between them, the amount of $A$ in the
773 \(i\)th system is changed by a net amount \(\Delta A_i = (A_i)_{final} -
774 (A_i)_{initial}\); and the fact that there is no net change in the
775 total amount of $A$ is expressed by the equation \(\sum_i \Delta
776 A_i = 0\). Thus, the law of conservation of matter in a chemical
777 reaction is expressed by \(\sum_i \Delta M_i = 0\), where $M_i$ is the
778 mass of the \(i\)th chemical component.
780 What is this new conserved quantity? Mathematically, it can be defined
781 as $Q_i = K_i\cdot M_i \cdot t_i$; whereupon (1-12) becomes
783 \begin{equation}
784 \sum_i \Delta Q_i = 0
785 \end{equation}
787 and at this point we can correct a slight quantitative inaccuracy. As
788 noted, the above relations hold accurately only when the temperature
789 differences are sufficiently small; i.e., they are really only
790 differential laws. On sufficiently accurate measurements one finds that
791 the specific heats $K_i$ depend on temperature; if we then adopt the
792 integral definition of $\Delta Q_i$,
793 \begin{equation}
794 \Delta Q_i = \int_{t_{i}}^{t_0} K_i(t) M_i dt
795 \end{equation}
797 the conservation law (1-13) will be found to hold in calorimetric
798 experiments with liquids and solids, to any accuracy now feasible. And
799 of course, from the manner in which the $K_i(t)$ are defined, this
800 relation will hold however our thermometers are calibrated.
802 Evidently, the stage is now set for a \ldquo{}new\rdquo{} physical
803 theory to account for these facts. In the 17th century, both Francis
804 Bacon and Isaac Newton had expressed their opinions that heat was a
805 form of motion; but they had no supporting factual evidence. By the
806 latter part of the 18th century, one had definite factual evidence
807 which seemed to make this view untenable; by the calorimetric
808 \ldquo{}mixing\rdquo{} experiments just described, Joseph Black had
809 recognized the distinction between temperature $t$ as a measure of
810 \ldquo{}hotness\rdquo{}, and heat $Q$ as a measure of /quantity/ of
811 something, and introduced the notion of heat capacity. He also
812 recognized the latent heats of freezing and vaporization. To account
813 for the conservation laws thus discovered, the theory then suggested
814 itself, naturally and almost inevitably, that heat was a /fluid/,
815 indestructible and uncreatable, which had no appreciable weight and
816 was attracted differently by different kinds of matter. In 1787,
817 Lavoisier invented the name \ldquo{}caloric\rdquo{} for this fluid.
819 Looking down today from our position of superior knowledge (i.e.,
820 hindsight) we perhaps need to be reminded that the caloric theory was
821 a perfectly respectable scientific theory, fully deserving of serious
822 consideration; for it accounted quantitatively for a large body of
823 experimental fact, and made new predictions capable of being tested by
824 experiment.
826 One of these predictions was the possibility of accounting for the
827 thermal expansion of bodies when heated; perhaps the increase in
828 volume was just a measure of the volume of caloric fluid
829 absorbed. This view met with some disappointment as a result of
830 experiments which showed that different materials, on absorbing the
831 same quantity of heat, expanded by different amounts. Of course, this
832 in itself was not enough to overthrow the caloric theory, because one
833 could suppose that the caloric fluid was compressible, and was held
834 under different pressure in different media.
836 Another difficulty that seemed increasingly serious by the end of the
837 18th century was the failure of all attempts to weigh this fluid. Many
838 careful experiments were carried out, by Boyle, Fordyce, Rumford and
839 others (and continued by Landolt almost into the 20th century), with
840 balances capable of detecting a change of weight of one part in a
841 million; and no change could be detected on the melting of ice,
842 heating of substances, or carrying out of chemical reactions. But even
843 this is not really a conclusive argument against the caloric theory,
844 since there is no /a priori/ reason why the fluid should be dense
845 enough to weigh with balances (of course, we know today from
846 Einstein's $E=mc^2$ that small changes in weight should indeed exist
847 in these experiments; but to measure them would require balances about
848 10^7 times more sensitive than were available).
850 Since the caloric theory derives entirely from the empirical
851 conservation law (1-13), it can be refuted conclusively only by
852 exhibiting new experimental facts revealing situations in which (1-13)
853 is /not/ valid. The first such case was [[http://www.chemteam.info/Chem-History/Rumford-1798.html][found by Count Rumford (1798)]],
854 who was in charge of boring cannon in the Munich arsenal, and noted
855 that the cannon and chips became hot as a result of the cutting. He
856 found that heat could be produced indefinitely, as long as the boring
857 was continued, without any compensating cooling of any other part of
858 the system. Here, then, was a clear case in which caloric was /not/
859 conserved, as in (1-13); but could be created at will. Rumford wrote
860 that he could not conceive of anything that could be produced
861 indefinitely by the expenditure of work, \ldquo{}except it be /motion/\rdquo{}.
863 But even this was not enough to cause abandonment of the caloric
864 theory; for while Rumford's observations accomplished the negative
865 purpose of showing that the conservation law (1-13) is not universally
866 valid, they failed to accomplish the positive one of showing what
867 specific law should replace it (although he produced a good hint, not
868 sufficiently appreciated at the time, in his crude measurements of the
869 rate of heat production due to the work of one horse). Within the
870 range of the original calorimetric experiments, (1-13) was still
871 valid, and a theory successful in a restricted domain is better than
872 no theory at all; so Rumford's work had very little impact on the
873 actual development of thermodynamics.
875 (This situation is a recurrent one in science, and today physics offers
876 another good example. It is recognized by all that our present quantum
877 field theory is unsatisfactory on logical, conceptual, and
878 mathematical grounds; yet it also contains some important truth, and
879 no responsible person has suggested that it be abandoned. Once again,
880 a semi-satisfactory theory is better than none at all, and we will
881 continue to teach it and to use it until we have something better to
882 put in its place.)
884 # what is "the specific heat of a gas at constant pressure/volume"?
885 # changed t for temperature below from capital T to lowercase t.
886 Another failure of the conservation law (1-13) was [[http://web.lemoyne.edu/~giunta/mayer.html][noted in 1842]] by
887 R. Mayer, a German physician, who pointed out that the data already
888 available showed that the specific heat of a gas at constant pressure,
889 $C_p$, was greater than at constant volume $C_v$. He surmised that the
890 difference was due to the work done in expansion of the gas against
891 atmospheric pressure, when measuring $C_p$. Supposing that the
892 difference $\Delta Q = (C_p - C_v)\Delta t$ calories in the heat
893 required to raise the temperature by $\Delta t$ was actually a
894 measure of an amount of energy, he could estimate from the amount
895 $P\Delta V$ ergs of work done the amount of mechanical energy (number
896 of ergs) corresponding to a calorie of heat; but again his work had
897 very little impact on the development of thermodynamics, because he
898 merely offered this notion as an interpretation of the data without
899 performing or suggesting any new experiments to check his hypothesis
900 further.
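Mayer's estimate is a one-line calculation once the numbers are in hand. An editor's sketch in Python, using approximate modern round numbers for dry air (none of these figures are from the text):

```python
# All numbers are approximate modern values for dry air, per gram.
C_p = 0.240            # cal/(g C), specific heat at constant pressure
C_v = 0.171            # cal/(g C), specific heat at constant volume
P = 1.013e6            # atmospheric pressure, dyn/cm^2
V = 22414.0 / 28.96    # volume of 1 g of air at 0 C and 1 atm, cm^3
dV = V / 273.15        # expansion on warming 1 g by 1 C (ideal-gas law)

work = P * dV                 # expansion work against the atmosphere, ergs
heat = (C_p - C_v) * 1.0      # the "extra" calories needed at constant P
J = work / heat               # mechanical equivalent, erg/cal
```

With these inputs \(J\) comes out near \(4.2 \times 10^7\) erg/cal, within about one percent of the modern \(4.184 \times 10^7\); Mayer's own data were, of course, far cruder.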
902 Up to this point, then, one has the experimental fact that a
903 conservation law (1-13) holds whenever purely thermal interactions
904 are involved; but in processes involving mechanical work, the
905 conservation law breaks down.
907 ** The First Law
908 Corresponding to the partially valid law of \ldquo{}conservation of
909 heat\rdquo{}, there had long been known another partially valid
910 conservation law in mechanics. The principle of conservation of
911 mechanical energy had been given by Leibnitz in 1693 in noting that,
912 according to the laws of Newtonian mechanics, one could define
913 potential and kinetic energy so that in mechanical processes they were
914 interconverted into each other, the total energy remaining
915 constant. But this too was not universally valid---the mechanical
916 energy was conserved only in the absence of frictional forces. In
917 processes involving friction, the mechanical energy seemed to
918 disappear.
920 So we had a law of conservation of heat, which broke down whenever
921 mechanical work was done; and a law of conservation of mechanical
922 energy, which broke down when frictional forces were present. If, as
923 Mayer had suggested, heat was itself a form of energy, then one had
924 the possibility of accounting for both of these failures in a new law
925 of conservation of /total/ (mechanical + heat) energy. On one hand,
926 the difference $C_p-C_v$ of heat capacities of gases would be
927 accounted for by the mechanical work done in expansion; on the other
928 hand, the disappearance of mechanical energy would be accounted for by
929 the heat produced by friction.
931 But to establish this requires more than just suggesting the idea and
932 illustrating its application in one or two cases --- if this is really
933 a new conservation law adequate to replace the two old ones, it must
934 be shown to be valid for /all/ substances and /all/ kinds of
935 interaction. For example, if one calorie of heat corresponded to $E$
936 ergs of mechanical energy in the gas experiments, but to a different
937 amount $E^\prime$ in heat produced by friction, then there would be no
938 universal conservation law. This \ldquo{}first law\rdquo{} of
939 thermodynamics must therefore take the form:
940 #+begin_quote
941 There exists a /universal/ mechanical equivalent of heat, so that the
942 total (mechanical energy) + (heat energy) remains constant in all
943 physical processes.
944 #+end_quote
946 It was James Prescott Joule who provided the [[http://www.chemteam.info/Chem-History/Joule-Heat-1845.html][first experimental data]]
947 indicating this universality, and providing the first accurate
948 numerical value of this mechanical equivalent. The calorie had been
949 defined as the amount of heat required to raise the temperature of one
950 gram of water by one degree Centigrade (more precisely, to raise it
951 from 14.5 to 15.5$^\circ C$). Joule measured the heating of a number
952 of different liquids due to mechanical stirring and electrical
953 heating, and established that, within the experimental accuracy (about
954 one percent) a /calorie/ of heat always corresponded to the same
955 amount of energy. Modern measurements give this numerical value as: 1
956 calorie = 4.184 \times 10^7 ergs = 4.184 joules.
957 # capitalize Joules? I think the convention is to spell them out in lowercase.
959 The circumstances of this important work are worth noting. Joule was
960 in frail health as a child, and was educated by private tutors,
961 including the chemist, John Dalton, who had formulated the atomic
962 hypothesis in the early nineteenth century. In 1839, when Joule was
963 nineteen, his father (a wealthy brewer) built a private laboratory for
964 him in Manchester, England; and the good use he made of it is shown by
965 the fact that, within a few months of the opening of this laboratory
966 (1840), he had completed his first important piece of work, at the
967 age of twenty. This was his establishment of the law of \ldquo{}Joule
968 heating,\rdquo{} $P=I^2 R$, due to the electric current in a
969 resistor. He then used this effect to determine the universality and
970 numerical value of the mechanical equivalent of heat, reported
971 in 1843. His mechanical stirring experiments reported in 1849 yielded
972 the value 1 calorie = 4.154 \times 10^7 ergs, about 0.7% too low;
973 this determination was not improved upon for several decades.
975 The first law of thermodynamics may then be stated mathematically as
976 follows:
978 #+begin_quote
979 There exists a state function (i.e., a definite function of the
980 thermodynamic state) $U$, representing the total energy of any system,
981 such that in any process in which we change from one equilibrium to
982 another, the net change in $U$ is given by the difference of the heat
983 $Q$ supplied to the system, and the mechanical work $W$ done by the
984 system.
985 #+end_quote
986 On an infinitesimal change of state, this becomes
988 \begin{equation}
989 dU = dQ - dW.
990 \end{equation}
992 For a system of two degrees of freedom, defined by pressure $P$,
993 volume $V$, and temperature $t$, we have $dW = PdV$. Then if we regard
994 $U$ as a function $U(V,t)$ of volume and temperature, the fact that
995 $U$ is a state function means that $dU$ must be an exact differential;
996 i.e., the integral
998 \begin{equation}
999 \int_1^2 dU = U(V_2,t_2) - U(V_1,t_1)
1000 \end{equation}
1001 between any two thermodynamic states must be independent of the
1002 path. Equivalently, the integral $\oint dU$ over any closed cyclic
1003 path (for example, integrate from state 1 to state 2 along path A,
1004 then back to state 1 by a different path B) must be zero. From (1-15),
1005 this gives for any cyclic integral,
1007 \begin{equation}
1008 \oint dQ = \oint P dV
1009 \end{equation}
1011 another form of the first law, which states that in any process in
1012 which the system ends in the same thermodynamic state as the initial
1013 one, the total heat absorbed by the system must be equal to the total
1014 work done.
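The cyclic form (1-17) is easy to check numerically for a concrete working substance. An editor's sketch in Python, assuming an ideal monatomic gas (for which \(U\) depends on temperature alone, a fact not yet established in the text); the cycle is a rectangle in the \((V,t)\) plane:

```python
import math

R = 8.314e7            # gas constant, erg/(mol K)
n_mol = 1.0
C_v = 1.5 * R          # monatomic ideal gas, for which U = U(T) only

V1, V2 = 1000.0, 3000.0    # cm^3
T1, T2 = 300.0, 400.0      # K

def isotherm_work(V_a, V_b, T, steps=10000):
    # Integral of P dV along an isotherm, with P = n R T / V (midpoint rule).
    h = (V_b - V_a) / steps
    return h * sum(n_mol * R * T / (V_a + (i + 0.5) * h) for i in range(steps))

# Closed cycle: isothermal expansion at T2, isochoric cooling to T1,
# isothermal compression at T1, isochoric heating back to T2.
# No P dV work is done on the isochoric legs.
W = isotherm_work(V1, V2, T2) + isotherm_work(V2, V1, T1)
W_exact = n_mol * R * (T2 - T1) * math.log(V2 / V1)

# U returns to its starting value, so the cyclic integral of dU vanishes:
dU_cycle = 0.0 + C_v * (T1 - T2) + 0.0 + C_v * (T2 - T1)
```

Since \(\oint dU = 0\) while \(\oint P\,dV = W > 0\) here, (1-15) forces \(\oint dQ = W\): the cycle absorbs net heat equal to the net work it delivers.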
1016 Although the equations (1-15)-(1-17) are rather trivial
1017 mathematically, it is important, in order to avoid later confusion,
1018 that we understand their exact meaning. In the first place, we have to
1019 understand that we are now measuring heat energy and mechanical energy
1020 in the same units; i.e. if we measured $Q$ in calories and $W$ in
1021 ergs, then (1-15) would of course not be correct. It does
1022 not matter whether we apply Joule's mechanical equivalent of heat
1023 to express $Q$ in ergs, or whether we apply it in the opposite way
1024 to express $U$ and $W$ in calories; each procedure will be useful in
1025 various problems. We can develop the general equations of
1026 thermodynamics
1027 without committing ourselves to any particular units,
1028 but of course all terms in a given equation must be expressed
1029 in the same units.
1031 Secondly, we have already stressed that the theory being
1032 developed must, strictly speaking, be a theory only of
1033 equilibrium states, since otherwise we have no operational definition
1034 of temperature. When we integrate over any \ldquo{}path\rdquo{} in the $(V-t)$
1035 plane, therefore, it must be understood that the path of
1036 integration is, strictly speaking, just a /locus of equilibrium
1037 states/; nonequilibrium states cannot be represented by points
1038 in the $(V-t)$ plane.
1040 But then, what is the relation between the path of equilibrium
1041 states appearing in our equations, and the sequence of conditions
1042 produced experimentally when we change the state of a system in
1043 the laboratory? With any change of state (heating, compression,
1044 etc.) proceeding at a finite rate we do not have equilibrium
1045 intermediate states; and so there is no corresponding \ldquo{}path\rdquo{} in
1046 the $(V-t)$ plane; only the initial and final equilibrium states
1047 correspond to definite points. But if we carry out the change
1048 of state more and more slowly, the physical states produced are
1049 nearer and nearer to equilibrium states. Therefore, we interpret
1050 a path of integration in the $(V-t)$ plane, not as representing
1051 the intermediate states of any real experiment carried out at
1052 a finite rate, but as the /limit/ of this sequence of states, in
1053 the limit where the change of state takes place arbitrarily
1054 slowly.
1056 An arbitrarily slow process, so that we remain arbitrarily
1057 near to equilibrium at all times, has another important property.
1058 If heat is flowing at an arbitrarily small rate, the temperature
1059 difference producing it must be arbitrarily small, and therefore
1060 an arbitrarily small temperature change would be able to reverse
1061 the direction of heat flow. If the volume is changing very
1062 slowly, the pressure difference responsible for it must be very
1063 small; so a small change in pressure would be able to reverse
1064 the direction of motion. In other words, a process carried out
1065 arbitrarily slowly is /reversible/; if a system is arbitrarily
1066 close to equilibrium, then an arbitrarily small change in its
1067 environment can reverse the direction of the process.
1068 Recognizing this, we can then say that the paths of
1069 integration in our equations are to be interpreted physically as
1070 /reversible paths/. In practice, some systems (such as gases)
1071 come to equilibrium so rapidly that rather fast changes of
1072 state (on the time scale of our own perceptions) may be quite
1073 good approximations to reversible changes; thus the change of
1074 state of water vapor in a steam engine may be considered
1075 reversible to a useful engineering approximation.
1078 ** Intensive and Extensive Parameters
1080 The literature of thermodynamics has long recognized a distinction between two
1081 kinds of quantities that may be used to define the thermodynamic
1082 state. If we imagine a given system as composed of smaller
1083 subsystems, we usually find that some of the thermodynamic variables
1084 have the same values in each subsystem, while others are additive,
1085 the total amount being the sum of the values of each subsystem.
1086 These are called /intensive/ and /extensive/ variables, respectively.
1087 According to this definition, evidently, the mass of a system is
1088 always an extensive quantity, and at equilibrium the temperature
1089 is an intensive quantity. Likewise, the energy will be extensive
1090 provided that the interaction energy between the subsystems can
1091 be neglected.
1093 It is important to note, however, that in general the terms
1094 \ldquo{}intensive\rdquo{} and \ldquo{}extensive\rdquo{}
1095 so defined cannot be regarded as
1096 establishing a real physical distinction between the variables.
1097 This distinction is, like the notion of number of degrees of
1098 freedom, in part an anthropomorphic one, because it may depend
1099 on the particular kind of subdivision we choose to imagine. For
1100 example, a volume of air may be imagined to consist of a number
1101 of smaller contiguous volume elements. With this subdivision,
1102 the pressure is the same in all subsystems, and is therefore
1103 intensive; while the volume is additive and therefore extensive.
1104 But we may equally well regard the volume of air as composed of
1105 its constituent nitrogen and oxygen subsystems (or we could
1106 regard pure hydrogen as composed of two subsystems, in which the
1107 molecules have odd and even rotational quantum numbers
1108 respectively, etc.). With this kind of subdivision the volume is the
1109 same in all subsystems, while the pressure is the sum of the
1110 partial pressures of its constituents; and it appears that the
1111 roles of \ldquo{}intensive\rdquo{} and \ldquo{}extensive\rdquo{}
1112 have been interchanged. Note that this ambiguity cannot be removed by requiring
1113 that we consider only spatial subdivisions, such that each
1114 subsystem has the same local composition. For, consider a stressed
1115 elastic solid, such as a stretched rubber band. If we imagine
1116 the rubber band as divided, conceptually, into small subsystems
1117 by passing planes through it normal to its axis, then the tension
1118 is the same in all subsystems, while the elongation is additive.
1119 But if the dividing planes are parallel to the axis, the
1120 elongation is the same in all subsystems, while the tension is
1121 additive; once again, the roles of \ldquo{}extensive\rdquo{} and
1122 \ldquo{}intensive\rdquo{} are
1123 interchanged merely by imagining a different kind of subdivision.
In spite of the fundamental ambiguity of the usual definitions,
the notions of extensive and intensive variables are useful,
and in practice we seem to have no difficulty in deciding
which quantities should be considered intensive. Perhaps the
distinction is better characterized, not by considering
subdivisions at all, but by adopting a different definition, in
which we recognize that some quantities have the nature of a
\ldquo{}force\rdquo{} or \ldquo{}potential\rdquo{}, or some other
local physical property, and are therefore called intensive,
while others have the nature of a \ldquo{}displacement\rdquo{} or
a \ldquo{}quantity\rdquo{} of something (i.e., are proportional
to the size of the system), and are therefore called extensive.
Admittedly, this definition is somewhat vague, in a way that can
also lead to ambiguities; in any event, let us agree to class
pressure, stress tensor, mass density, energy density, particle
density, temperature, chemical potential, and angular velocity
as intensive, while volume, mass, energy, particle numbers,
strain, entropy, and angular momentum will be considered
extensive.
** The Kelvin Temperature Scale
The form of the first law,
$dU = dQ - dW$, expresses the net energy increment of a system as
the heat energy supplied to it, minus the work done by it. In
the simplest systems of two degrees of freedom, defined by
pressure and volume as the thermodynamic variables, the work done
in an infinitesimal reversible change of state can be separated
into a product $dW = PdV$ of an intensive and an extensive quantity.
Furthermore, we know that the pressure $P$ is not only the
intensive factor of the work; it is also the \ldquo{}potential\rdquo{}
which governs mechanical equilibrium (in this case, equilibrium
with respect to exchange of volume) between two systems; i.e., if
they are separated by a flexible but impermeable membrane, the two
systems will exchange volume $dV_1 = -dV_2$ in a direction
determined by the pressure difference, until the pressures are
equalized. The energy exchanged in this way between the systems
is a product of the form
#+begin_quote
(/intensity/ of something) \times (/quantity/ of something exchanged)
#+end_quote
Now if heat is merely a particular form of energy that can
also be exchanged between systems, the question arises whether
the quantity of heat energy $dQ$ exchanged in an infinitesimal
reversible change of state can also be written as a product of one
factor which measures the \ldquo{}intensity\rdquo{} of the heat,
times another that represents the \ldquo{}quantity\rdquo{}
of something exchanged between
the systems, such that the intensity factor governs the
conditions of thermal equilibrium and the direction of heat
exchange, in the same way that pressure does for volume exchange.
But we already know that the /temperature/ is the quantity
that governs the heat flow (i.e., heat flows from the hotter to
the cooler body until the temperatures are equalized). So the
intensive factor in $dQ$ must be essentially the temperature. But
our temperature scale is at present still arbitrary, and we can
hardly expect that such a factorization will be possible for all
calibrations of our thermometers.
The same thing is evidently true of pressure; if instead of
the pressure $P$ as ordinarily defined, we worked with any
monotonic increasing function $P_1 = P_1(P)$ we would find that
$P_1$ is just as good as $P$ for determining the direction of
volume exchange and the condition of mechanical equilibrium; but
the work done would not be given by $PdV$; in general, it could
not even be expressed in the form $P_1 \cdot dF(V)$, where $F(V)$
is some function of $V$.
Therefore we ask: out of all the monotonic functions $t_1(t)$
corresponding to different empirical temperature scales, is
there one (which we denote as $T(t)$) which forms a \ldquo{}natural\rdquo{}
intensity factor for heat, such that in a reversible change
$dQ = TdS$, where $S(U,V)$ is a new function of the thermodynamic
state? If so, then the temperature scale $T$ will have a great
theoretical advantage, in that the laws of thermodynamics will
take an especially simple form in terms of this particular scale,
and the new quantity $S$, which we call the /entropy/, will be a
kind of \ldquo{}volume\rdquo{} factor for heat.
We recall that $dQ = dU + PdV$ is not an exact differential;
i.e., on a change from one equilibrium state to another the
integral

\[\int_1^2 dQ\]

cannot be set equal to the difference $Q_2 - Q_1$ of values of any
state function $Q(U,V)$, since the integral has different values
for different paths connecting the same initial and final states.
Thus there is no \ldquo{}heat function\rdquo{} $Q(U,V)$, and the notion of
\ldquo{}amount of heat\rdquo{} $Q$ stored in a body has no meaning
(nor does the \ldquo{}amount of work\rdquo{} $W$;
only the total energy is a well-defined quantity).
But we want the entropy $S(U,V)$ to be a definite quantity,
like the energy or volume, and so $dS$ must be an exact differential.
On an infinitesimal reversible change from one equilibrium state
to another, the first law requires that it satisfy[fn:: The first
equality comes from our requirement that $dQ = T\,dS$. The second
equality comes from the fact that $dU = dQ - dW$ (the first law) and
that $dW = PdV$ in the case where the state has two degrees of
freedom, pressure and volume.]
\begin{equation}
dS(U,V) = \frac{dQ}{T} = \frac{dU}{T} + \frac{P}{T}dV
\end{equation}

Thus $(1/T)$ must be an /integrating factor/ which converts $dQ$ into
an exact differential [[fn::A differential $M(x,y)\,dx +
N(x,y)\,dy$ is called /exact/ if there is a scalar function
$\Phi(x,y)$ such that $M = \frac{\partial \Phi}{\partial x}$ and
$N=\frac{\partial \Phi}{\partial y}$. If there is, \Phi is called the
/potential function/ of the differential. Conceptually, this means
that $M(x,y)\,dx + N(x,y)\,dy$ is the derivative of a scalar potential
and so consequently corresponds to a conservative field.

Even if there is no such potential function
\Phi for the given differential, it is possible to coerce an
inexact differential into an exact one by multiplying by an unknown
function $\mu(x,y)$ (called an /integrating factor/) and requiring the
resulting differential $\mu M\, dx + \mu N\, dy$ to be exact.

To complete the analogy, here we have the differential $dQ =
dU + PdV$ (by the first law) which is not exact---conceptually, there
is no scalar potential nor conserved quantity corresponding to
$dQ$. We have introduced a new differential $dS = \frac{1}{T}dQ$, and we
are searching for the temperature scale $T(U,V)$ which makes $dS$
exact (i.e. which makes $S$ correspond to a conserved quantity). This means
that $\frac{1}{T}$ is playing the role of the integrating factor
\ldquo{}\mu\rdquo{} for the differential $dQ$.]]
Now the question of the existence and properties of
integrating factors is a purely mathematical one, which can be
investigated independently of the properties of any particular
substance. Let us denote this integrating factor for the moment
by $w(U,V) = T^{-1}$; then the first law becomes

\begin{equation}
dS(U,V) = w\, dU + w P\, dV
\end{equation}

from which the derivatives are

\begin{equation}
\left(\frac{\partial S}{\partial U}\right)_V = w, \qquad
\left(\frac{\partial S}{\partial V}\right)_U = wP.
\end{equation}
The condition that $dS$ be exact is that the cross-derivatives be
equal, as in (1-4):

\begin{equation}
\frac{\partial^2 S}{\partial U \partial V} = \frac{\partial^2
S}{\partial V \partial U},
\end{equation}

or

\begin{equation}
\left(\frac{\partial w}{\partial V}\right)_U = w\left(\frac{\partial
P}{\partial U}\right)_V + P\cdot \left(\frac{\partial w}{\partial U}\right)_V.
\end{equation}

Any function $w(U,V)$ satisfying this differential equation is an
integrating factor for $dQ$.
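This exactness condition is easy to check symbolically in a
concrete case. The following sketch is ours, not Jaynes'; it
assumes an ideal gas with constant heat capacity, $U = nC_VT$ and
$PV = nRT$, and verifies that $w = 1/T$ makes $dS = w\,dU + wP\,dV$
exact (equal cross-derivatives):

#+begin_src python
import sympy as sp

U, V, n, R, Cv = sp.symbols('U V n R C_V', positive=True)

# Ideal gas with constant heat capacity (our test case, not from
# the text): U = n*Cv*T and P*V = n*R*T, written as functions of
# the state variables (U, V).
T = U / (n * Cv)
P = n * R * T / V              # = R*U/(Cv*V)
w = 1 / T                      # candidate integrating factor w = 1/T

# dS = w*dU + (w*P)*dV is exact iff the cross-derivatives agree:
lhs = sp.diff(w, V)            # d(w)/dV at constant U
rhs = sp.diff(w * P, U)        # d(wP)/dU at constant V
assert sp.simplify(lhs - rhs) == 0
print("w = 1/T is an integrating factor for dQ")
#+end_src

Any monotonic distortion of $T$ in this check would make the two
derivatives disagree, which is the content of the uniqueness
argument that follows.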
But if $w(U,V)$ is one such integrating factor, which leads
to the new state function $S(U,V)$, it is evident that
$w_1(U,V) \equiv w \cdot f(S)$ is an equally good integrating factor, where
$f(S)$ is an arbitrary function. Use of $w_1$ will lead to a
different state function

\begin{equation}
S_1(U,V) = \int^S f(S^\prime)\, dS^\prime
\end{equation}

The mere conversion of $dQ$ into an exact differential is, therefore,
not enough to determine any unique entropy function $S(U,V)$.
However, the derivative

\begin{equation}
\left(\frac{\partial U}{\partial V}\right)_S = -P
\end{equation}

is evidently uniquely determined; so also, therefore, is the
family of lines of constant entropy, called /adiabats/, in the
$(U-V)$ plane. But, as (1-24) shows, the numerical value of $S$ on
each adiabat is still completely undetermined.
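For a concrete substance the adiabat family can be exhibited
explicitly. In this sketch (our illustration, assuming an ideal
gas with constant heat capacity, $U = nC_VT$ and $PV = nRT$), the
condition $(\partial U/\partial V)_S = -P$ becomes the ordinary
differential equation $dU/dV = -(R/C_V)\,U/V$, which sympy solves:

#+begin_src python
import sympy as sp

V = sp.symbols('V', positive=True)
R, Cv = sp.symbols('R C_V', positive=True)
U = sp.Function('U')

# Along an adiabat (dU/dV)_S = -P.  For an ideal gas, U = n*Cv*T
# and P = n*R*T/V give P = (R/Cv)*U/V, so the adiabats satisfy
# dU/dV = -(R/Cv)*U/V.
adiabat = sp.dsolve(sp.Eq(U(V).diff(V), -(R / Cv) * U(V) / V), U(V))
print(adiabat)   # solution is U(V) = C1 * V**(-R/Cv)

# Check that the solution satisfies the adiabat equation; i.e.
# U*V**(R/Cv) (equivalently T*V**(R/Cv)) is constant on each adiabat.
assert sp.simplify(adiabat.rhs.diff(V) + (R / Cv) * adiabat.rhs / V) == 0
#+end_src

The one free constant per curve is exactly the undetermined label
$S$ on each adiabat discussed in the text.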
In order to fix the relative values of $S$ on different
adiabats we need to add the condition, not yet put into the
equations, that the integrating factor $w(U,V) = T^{-1}$ is to
define a new temperature scale. In other words, we now ask: out
of the infinite number of different integrating factors allowed
by the differential equation (1-23), is it possible to find one
which is a function only of the empirical temperature $t$? If
$w=w(t)$, we can write

\begin{equation}
\left(\frac{\partial w}{\partial V}\right)_U = \frac{dw}{dt}\left(\frac{\partial
t}{\partial V}\right)_U
\end{equation}
\begin{equation}
\left(\frac{\partial w}{\partial U}\right)_V = \frac{dw}{dt}\left(\frac{\partial
t}{\partial U}\right)_V
\end{equation}

and (1-23) becomes

\begin{equation}
\frac{d}{dt}\log{w} = \frac{\left(\frac{\partial P}{\partial
U}\right)_V}{\left(\frac{\partial t}{\partial V}\right)_U-P\left(\frac{\partial t}{\partial U}\right)_V}
\end{equation}

which shows that $w$ will be determined to within a multiplicative
factor.
Is the temperature scale thus defined independent of the
empirical scale from which we started? To answer this, let
$t_1 = t_1(t)$ be any monotonic function which defines a different
empirical temperature scale. In place of (1-28), we then have

\begin{equation}
\frac{d}{dt_1}\log{w} \quad=\quad \frac{\left(\frac{\partial P}{\partial
U}\right)_V}{\left(\frac{\partial t_1}{\partial V}\right)_U-P\left(\frac{\partial t_1}{\partial U}\right)_V}
\quad = \quad
\frac{\left(\frac{\partial P}{\partial
U}\right)_V}{\frac{dt_1}{dt}\left[ \left(\frac{\partial t}{\partial
V}\right)_U-P\left(\frac{\partial t}{\partial U}\right)_V\right]},
\end{equation}
or
\begin{equation}
\frac{d}{dt_1}\log{w_1} = \frac{dt}{dt_1}\frac{d}{dt}\log{w}
\end{equation}

which reduces to $d \log{w_1} = d \log{w}$, or

\begin{equation}
w_1 = C\cdot w
\end{equation}

Therefore, integrating factors derived from whatever empirical
temperature scale can differ among themselves only by a
multiplicative factor. For any given substance, therefore, except
for this factor (which corresponds just to our freedom to choose
the size of the units in which we measure temperature), there is
only /one/ temperature scale $T(t) = 1/w$ with the property that
$dS = dQ/T$ is an exact differential.
To find a feasible way of realizing this temperature scale
experimentally, multiply numerator and denominator of the right-hand
side of (1-28) by the heat capacity at constant volume,
$C_V^\prime = (\partial U/\partial t)_V$, the prime denoting that
it is measured in terms of the empirical temperature scale $t$.
Integrating between any two states denoted 1 and 2, we have

\begin{equation}
\frac{T_2}{T_1} = \exp\left\{\int_{t_1}^{t_2}
\frac{\left(\frac{\partial P}{\partial t}\right)_V dt}{P - C_V^\prime
\left(\frac{\partial t}{\partial V}\right)_U} \right\}
\end{equation}

If the quantities on the right-hand side have been determined
experimentally, then a numerical integration yields the ratio
of Kelvin temperatures of the two states.
This process is particularly simple if we choose for our
system a volume of gas with the property found in Joule's famous
expansion experiment; when the gas expands freely into a vacuum
(i.e., without doing work, or $U = \text{const.}$), there is no change in
temperature. Real gases when sufficiently far from their
condensation points are found to obey this rule very accurately.
But then

\begin{equation}
\left(\frac{\partial t}{\partial V}\right)_U = 0
\end{equation}

and on a change of state in which we heat this gas at constant
volume, (1-31) collapses to

\begin{equation}
\frac{T_2}{T_1} = \exp\left\{\int_{t_1}^{t_2}
\frac{1}{P}\left(\frac{\partial P}{\partial t}\right)_V dt\right\} = \frac{P_2}{P_1}.
\end{equation}

Therefore, with a constant-volume ideal gas thermometer (or more
generally, a thermometer using any substance obeying (1-32) and
held at constant volume), the measured pressure is directly
proportional to the Kelvin temperature.
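The way the arbitrary empirical scale drops out of this result
can be demonstrated numerically. In the following sketch (our
own; the numbers and the deliberately distorted scale
$t = \sqrt{T}$ are invented for illustration) a constant-volume
ideal gas is read against a nonlinear empirical thermometer, and
the integral still returns exactly the pressure ratio:

#+begin_src python
import math

# Invented example: constant-volume ideal gas, P = c*T, but the
# thermometer reports a distorted empirical temperature t = sqrt(T).
c = 100.0                          # pressure per kelvin, arbitrary units
P  = lambda t: c * t * t           # pressure vs. empirical temperature t
dP = lambda t: 2.0 * c * t         # (dP/dt) at constant volume

t1, t2 = math.sqrt(200.0), math.sqrt(300.0)  # readings at T = 200 K, 300 K

# Right-hand side of (1-33): exp of the integral of (1/P)(dP/dt) dt,
# evaluated here by the trapezoid rule.
N = 100_000
h = (t2 - t1) / N
integral = sum(0.5 * h * (dP(t1 + i * h) / P(t1 + i * h)
                          + dP(t1 + (i + 1) * h) / P(t1 + (i + 1) * h))
               for i in range(N))
ratio = math.exp(integral)

# The distortion drops out: the result is just P2/P1 = T2/T1 = 1.5
assert abs(ratio - P(t2) / P(t1)) < 1e-6
assert abs(ratio - 1.5) < 1e-6
#+end_src

Any other monotonic choice of $t(T)$ gives the same ratio, which
is the invariance established in the derivation above.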
For an imperfect gas, if we have measured $(\partial t /\partial
V)_U$ and $C_V^\prime$, Eq. (1-31) determines the necessary
corrections to (1-33). However, an alternative form of (1-31), in
which the roles of pressure and volume are interchanged, proves to be
more convenient for experimental determinations. To derive it,
introduce the enthalpy function

\begin{equation}H = U + PV\end{equation}

with the property

\begin{equation}
dH = dQ + VdP
\end{equation}

Equation (1-19) then becomes

\begin{equation}
dS = \frac{dH}{T} - \frac{V}{T}dP.
\end{equation}
Repeating the steps (1-20) to (1-31) of the above derivation
starting from (1-36) instead of from (1-19), we arrive at

\begin{equation}
\frac{T_2}{T_1} = \exp\left\{\int_{t_1}^{t_2}
\frac{\left(\frac{\partial V}{\partial t}\right)_P dt}{V + C_P^\prime
\left(\frac{\partial t}{\partial P}\right)_H}\right\}
\end{equation}

or

\begin{equation}
\frac{T_2}{T_1} = \exp\left\{\int_{t_1}^{t_2}\frac{\alpha^\prime\,
dt}{1+\left(C_P^\prime \cdot \mu^\prime / V\right)}\right\}
\end{equation}
where
\begin{equation}
\alpha^\prime \equiv \frac{1}{V}\left(\frac{\partial V}{\partial t}\right)_P
\end{equation}
is the thermal expansion coefficient,
\begin{equation}
C_P^\prime \equiv \left(\frac{\partial H}{\partial t}\right)_P
\end{equation}
is the heat capacity at constant pressure, and
\begin{equation}
\mu^\prime \equiv \left(\frac{\partial t}{\partial P}\right)_H
\end{equation}

is the coefficient measured in the Joule-Thomson porous plug
experiment, the primes denoting again that all are to be measured
in terms of the empirical temperature scale $t$.
Since $\alpha^\prime$, $C_P^\prime$, $\mu^\prime$ are all
easily measured in the laboratory, Eq. (1-38) provides a
feasible way of realizing the Kelvin temperature scale experimentally,
taking account of the imperfections of real gases.
For an account of the work of Roebuck and others based on this
relation, see [[http://books.google.com/books?id=KKJKAAAAMAAJ][Zemansky (1943)]]; pp. 252-255.
Note that if $\mu^\prime = 0$ and we heat the gas at constant
pressure, (1-38) reduces to

\begin{equation}
\frac{T_2}{T_1} = \exp\left\{ \int_{t_1}^{t_2}
\frac{1}{V}\left(\frac{\partial V}{\partial t}\right)_P dt \right\} = \frac{V_2}{V_1}
\end{equation}

so that, with a constant-pressure gas thermometer using a gas for
which the Joule-Thomson coefficient is zero, the Kelvin temperature
is proportional to the measured volume.
Now consider another empirical fact, [[http://en.wikipedia.org/wiki/Boyle%27s_law][Boyle's law]]. For gases
sufficiently far from their condensation points---which is also
the condition under which (1-32) is satisfied---Boyle found that
the product $PV$ is a constant at any fixed temperature. This
product is, of course, proportional to the number of moles $n$
present, and so Boyle's equation of state takes the form

\begin{equation}PV = n \cdot f(t)\end{equation}

where $f(t)$ is a function that depends on the particular empirical
temperature scale used. But from (1-33) we must then have
$f(t) = RT$, where $R$ is a constant, the universal gas constant,
whose numerical value (1.986 calories per mole per degree K) depends
on the size of the units in which we choose to measure the Kelvin
temperature $T$. In terms of the Kelvin temperature, the ideal gas
equation of state is therefore simply

\begin{equation}
PV = nRT
\end{equation}
The relations (1-32) and (1-44) were found empirically, but
with the development of thermodynamics one could show that they
are not logically independent. In fact, all the material needed
for this demonstration is now at hand, and we leave it as an
exercise for the reader to prove that Joule's relation (1-32) is
a logical consequence of Boyle's equation of state (1-44) and the
first law.
Historically, the advantages of the gas thermometer were
discovered empirically before the Kelvin temperature scale was
defined; and the temperature scale \theta defined by

\begin{equation}
\theta = \lim_{P\rightarrow 0}\left(\frac{PV}{nR}\right)
\end{equation}

was found to be convenient, easily reproducible, and independent
of the properties of any particular gas. It was called the
/absolute/ temperature scale; and from the foregoing it is clear
that with the same choice of the numerical constant $R$, the
absolute and Kelvin scales are identical.
For many years the unit of our temperature scale was the
Centigrade degree, so defined that the difference $T_b - T_f$ of
boiling and freezing points of water was exactly 100 degrees.
However, improvements in experimental techniques have made another
method more reproducible; and the degree was redefined by the
Tenth General Conference of Weights and Measures in 1954, by
the condition that the triple point of water is at 273.16^\circ K,
this number being exact by definition. The freezing point, 0^\circ C,
is then 273.15^\circ K. This new degree is called the Celsius degree.
For further details, see the U.S. National Bureau of Standards
Technical News Bulletin, October 1963.
The appearance of such a strange and arbitrary-looking
number as 273.16 in the /definition/ of a unit is the result of
the historical development, and is the means by which much
greater confusion is avoided. Whenever improved techniques make
possible a new and more precise (i.e., more reproducible)
definition of a physical unit, its numerical value is of course
chosen so as to be well inside the limits of error with which the
old unit could be defined. Thus the old Centigrade and new Celsius
scales are the same, within the accuracy with which the
Centigrade scale could be realized; so the same notation, ^\circ C,
is used for both. Only in this way can old measurements retain
their value and accuracy, without need of corrections every time
a unit is redefined.
Exactly the same thing has happened in the definition of
the calorie; for a century, beginning with the work of Joule,
more and more precise experiments were performed to determine
the mechanical equivalent of heat more and more accurately. But
eventually mechanical and electrical measurements of energy became
far more reproducible than calorimetric measurements; so
recently the calorie was redefined to be 4.1840 joules, this
number now being exact by definition. Further details are given
in the aforementioned Bureau of Standards Bulletin.
The derivations of this section have shown that, for any
particular substance, there is (except for choice of units) only
one temperature scale $T$ with the property that $dQ = TdS$ where
$dS$ is the exact differential of some state function $S$. But this
in itself provides no reason to suppose that the /same/ Kelvin
scale will result for all substances; i.e., if we determine a
\ldquo{}helium Kelvin temperature\rdquo{} and a
\ldquo{}carbon dioxide Kelvin temperature\rdquo{} by the measurements
indicated in (1-38), and choose the units so that they agree
numerically at one point, will they then
agree at other points? Thus far we have given no reason to
expect that the Kelvin scale is /universal/, other than the empirical
fact that the limit (1-45) is found to be the same for all gases.
In section 2.0 we will see that this universality is a consequence
of the second law of thermodynamics (i.e., if we ever
find two substances for which the Kelvin scale as defined above
is different, then we can take advantage of this to make a
perpetual motion machine of the second kind).
Usually, the second law is introduced before discussing
entropy or the Kelvin temperature scale. We have chosen this
unusual order so as to demonstrate that the concepts of entropy
and Kelvin temperature are logically independent of the second
law; they can be defined theoretically, and the experimental
procedures for their measurement can be developed, without any
appeal to the second law. From the standpoint of logic, therefore,
the second law serves /only/ to establish that the Kelvin
temperature scale is the same for all substances.
** COMMENT Entropy of an Ideal Boltzmann Gas

At the present stage we are far from understanding the physical
meaning of the function $S$ defined by (1-19); but we can
investigate its mathematical form and numerical values. Let us do
this for a system consisting of $n$ moles of a substance which
obeys the ideal gas equation of state and for which the heat
capacity at constant volume $C_V$ is a constant. The difference in
entropy between any two states (1) and (2) is, from (1-19),

\begin{equation}
S_2 - S_1 = \int_1^2 dS = \int_1^2 \frac{dU + P\,dV}{T}
\end{equation}
where we integrate over any reversible path connecting the two
states. From the manner in which $S$ was defined, this integral
must be the same whatever path we choose. Consider, then, a
path consisting of a reversible expansion at constant temperature
to a state 3 which has the initial temperature $T_1$ and the
final volume $V_2$; followed by heating at constant volume to the
final temperature $T_2$. Then (1-47) becomes

\begin{equation}
S_2 - S_1 = \int_1^3 \left(\frac{\partial S}{\partial V}\right)_T dV
+ \int_3^2 \left(\frac{\partial S}{\partial T}\right)_V dT
\end{equation}
To evaluate the integral over $(1 \rightarrow 3)$, note that since
$dU = T\,dS - P\,dV$, the Helmholtz free energy function
$F \equiv U - TS$ has the property $dF = -S\,dT - P\,dV$; and of
course $dF$ is an exact differential since $F$ is a definite state
function. The condition that $dF$ be exact is, analogous to (1-22),

\begin{equation}
\left(\frac{\partial S}{\partial V}\right)_T =
\left(\frac{\partial P}{\partial T}\right)_V
\end{equation}

which is one of the Maxwell relations. For the ideal gas,
$(\partial P/\partial T)_V = nR/V$, so the integral over
$(1 \rightarrow 3)$ is $nR \log(V_2/V_1)$; and since
$(\partial S/\partial T)_V = \frac{1}{T}(\partial U/\partial T)_V
= n C_V/T$, where $C_V$ is the molar heat capacity at constant
volume, the integral over $(3 \rightarrow 2)$ is
$n C_V \log(T_2/T_1)$. Collecting these results, we have

\begin{equation}
S_2 - S_1 = nR \log(V_2/V_1) + n C_V \log(T_2/T_1)
\end{equation}

since $C_V$ was assumed independent of $T$. Thus the entropy
function must have the form

\begin{equation}
S(n,V,T) = nR \log V + n C_V \log T + (\text{const.})
\end{equation}
From the derivation, the additive constant must be independent
of $V$ and $T$; but it can still depend on $n$. We indicate this
by writing

\begin{equation}
S(n,V,T) = n\left[R \log V + C_V \log T\right] + f(n)
\end{equation}

where $f(n)$ is a function not determined by the definition (1-47).
The form of $f(n)$ is, however, restricted by the condition that
the entropy be an extensive quantity; i.e., two identical systems
placed together should have twice the entropy of a single system;
or, more generally, if we increase the size of the system by any
factor $q$, keeping the density and temperature the same, the
entropy must increase by the same factor $q$:

\begin{equation}
S(qn, qV, T) = q\,S(n,V,T)
\end{equation}

Substituting (1-54) into (1-55), we find that $f(n)$ must satisfy
the functional equation

\begin{equation}
f(qn) = q\,f(n) - qRn \log q
\end{equation}

To solve this, one can differentiate with respect to $q$ and set
$q = 1$; we then obtain the differential equation

\begin{equation}
n f^\prime(n) - f(n) + Rn = 0
\end{equation}

which is readily solved; alternatively, just set $n = 1$ in (1-56)
and replace $q$ by $n$. By either procedure we find

\begin{equation}
f(n) = n f(1) - Rn \log n.
\end{equation}
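The claimed solution is easy to check symbolically. The following
sketch is ours; it writes $A$ for the undetermined constant $f(1)$
and verifies that $f(n) = n f(1) - Rn\log n$ satisfies both the
functional equation (1-56) and the differential equation (1-57),
and that the resulting entropy is extensive:

#+begin_src python
import sympy as sp

n, q, R, A = sp.symbols('n q R A', positive=True)

# Claimed solution: f(n) = n*f(1) - R*n*log(n), writing A for f(1).
f = lambda x: A * x - R * x * sp.log(x)

# Functional equation (1-56) from extensivity:
# f(q*n) = q*f(n) - q*R*n*log(q)
resid = f(q * n) - (q * f(n) - q * R * n * sp.log(q))
assert sp.simplify(sp.expand_log(sp.expand(resid))) == 0

# Differential equation (1-57): n*f'(n) - f(n) + R*n = 0
assert sp.simplify(n * sp.diff(f(n), n) - f(n) + R * n) == 0

# The resulting entropy S = n*(Cv*log T + R*log(V/n) + A) is extensive:
Cv, T, V = sp.symbols('C_V T V', positive=True)
S = lambda nn, VV: nn * (Cv * sp.log(T) + R * sp.log(VV / nn) + A)
assert sp.simplify(S(q * n, q * V) - q * S(n, V)) == 0
#+end_src

Dropping the $-Rn\log n$ term breaks the extensivity check, which
is precisely the Gibbs-paradox defect of the classical result
discussed below.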
As a check, it is easily verified that this is the solution of
(1-56) and (1-57). The entropy must therefore have the form

\begin{equation}
S(n,V,T) = n\left[C_V \log T + R \log(V/n) + A\right]
\end{equation}

where $A \equiv f(1)$ is still an arbitrary constant, not determined
by the definition (1-19), or by the condition (1-55) that $S$ be
extensive. However, $A$ is not without physical meaning; we will
see in the next section that the vapor pressure of this substance
(and more generally, its chemical potential) depends on $A$.
Later, it will appear that the numerical value of $A$ involves
Planck's constant, and its theoretical determination therefore
requires quantum statistics.
We conclude from this that, in any region where experimentally
$C_V = \text{const.}$ and the ideal gas equation of state is
obeyed, the entropy must have the form (1-59). The fact that
classical statistical mechanics does not lead to this result,
the term $nR \log(1/n)$ being missing (Gibbs paradox), was
historically one of the earliest clues indicating the need for
the quantum theory.

In the case of a liquid, the volume does not change appreciably
on heating, and so $dS = n C_V\, dT/T$; and if $C_V$ is
independent of temperature, we would have in place of (1-59),

\begin{equation}
S = n\left[C_V \log T + A_\ell\right]
\end{equation}

where $A_\ell$ is an integration constant, which also has physical
meaning in connection with conditions of equilibrium between
two different phases.
** The Second Law: Definition
Probably no proposition in physics has been the subject of more
deep and sustained confusion than the second law of
thermodynamics. It is not in the province of macroscopic
thermodynamics to explain the underlying reason for the second
law; but at this stage we should at least be able to state this
law in clear and experimentally meaningful terms. However,
examination of some current textbooks reveals that, after more
than a century, different authors still disagree as to the proper
statement of the second law, its physical meaning, and its exact
range of validity.
Later on in this book it will be one of our major objectives
to show, from several different viewpoints, how much clearer and
simpler these problems now appear in the light of recent
developments in statistical mechanics. For the present, however,
our aim is only to prepare the way for this by pointing out
exactly what it is that is to be proved later. As a start on this
attempt, we note that the second law conveys a certain piece of
information about the direction in which processes take place.
In application it enables us to predict such things as the final
equilibrium state of a system, in situations where the first law
alone is insufficient to do this.
A concrete example will be helpful. We have a vessel
equipped with a piston, containing $N$ moles of carbon dioxide.
The system is initially at thermal equilibrium at temperature
$T_0$, volume $V_0$ and pressure $P_0$; and under these conditions
it contains $n$ moles of CO_2 in the vapor phase and $(N - n)$
moles in the liquid phase. The system is now thermally insulated
from its surroundings, and the piston is moved rapidly (i.e., so
that $n$ does not change appreciably during the motion) so that
the system has a new volume $V_f$; and immediately after the
motion, a new pressure $P_1$. The piston is now held fixed in its
new position, and the system allowed to come once more to
equilibrium. During this process, will the CO_2 tend to evaporate
further, or condense further? What will be the final equilibrium
temperature $T_{eq}$, the final pressure $P_{eq}$, and final value
of $n_{eq}$?
It is clear that the first law alone is incapable of answering
these questions; for if the only requirement is conservation of
energy, then the CO_2 might condense, giving up its heat of
vaporization and raising the temperature of the system; or it
might evaporate further, lowering the temperature. Indeed, all
values of $n_{eq}$ in $0 \leq n_{eq} \leq N$ would be possible
without any violation of the first law. In practice, however,
this process will be found to go in only one direction and the
system will reach a definite final equilibrium state with a
temperature, pressure, and vapor density predictable from the
second law.
Now there are dozens of possible verbal statements of the
second law; and from one standpoint, any statement which conveys
the same information has equal right to be called \ldquo{}the
second law\rdquo{}. However, not all of them are equally direct
statements of experimental fact, or equally convenient for
applications, or equally general; and it is on these grounds that
we ought to choose among them.
Some of the most popular statements of the second law
belong to the class of the well-known \ldquo{}impossibility\rdquo{}
assertions; i.e., it is impossible to transfer heat from a lower
to a higher temperature without leaving compensating changes in
the rest of the universe, it is impossible to convert heat into
useful work without leaving compensating changes, it is
impossible to make a perpetual motion machine of the second kind,
etc.
Such formulations have one clear logical merit; they are
stated in such a way that, if the assertion should be false, a
single experiment would suffice to demonstrate that fact
conclusively. It is good to have our principles stated in such a
clear, unequivocal way.
However, impossibility statements also have some
disadvantages. In the first place, they are not, and by their very
nature cannot be, statements of experimental fact. Indeed, we
can put it more strongly; we have no record of anyone having
seriously tried to do any of the various things which have been
asserted to be impossible, except for one case which actually
succeeded. In the experimental realization of negative spin
temperatures, one can transfer heat from a lower to a higher
temperature without external changes; and so one of the common
impossibility statements is now known to be false [for a clear
discussion of this, see the article of N. F. Ramsey (1956);
experimental details of calorimetry with negative temperature
spin systems are given by Abragam and Proctor (1958)].
Finally, impossibility statements are of very little use in
applications of thermodynamics; the assertion that a certain kind
of machine cannot be built, or that a certain laboratory feat
cannot be performed, does not tell me very directly whether my
carbon dioxide will condense or evaporate. For applications,
such assertions must first be converted into a more explicit
mathematical form.
For these reasons, it appears that a different kind of statement
of the second law will be, not necessarily more
\ldquo{}correct,\rdquo{} but more useful in practice. Now both
Clausius (1875) and Planck (1897) have laid great stress on their
conclusion that the most general statement, and also the most
immediately useful in applications, is simply the existence of a
state function, called the entropy, which tends to increase. More
precisely: in an adiabatic change of state, the entropy of a
system may increase or may remain constant, but does not
decrease. In a process involving heat flow to or from the system,
the total entropy of all bodies involved may increase or may
remain constant; but does not decrease; let us call this the
\ldquo{}weak form\rdquo{} of the second law.
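In symbols (an editor's gloss, not part of the original text):
for an adiabatic change of state, and for a process with heat
flow among several bodies, the weak form asserts respectively
that

\[
(\Delta S)_{\text{adiabatic}} \;\geq\; 0, \qquad
\Delta \sum_i S_i \;\geq\; 0,
\]

with equality permitted in both cases; nothing is yet said about
how large the increase will be.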
The weak form of the second law is capable of answering the first
question posed above; thus the carbon dioxide will evaporate
further if, and only if, this leads to an increase in the total
entropy of the system. This alone, however, is not enough to
answer the second question; to predict the exact final
equilibrium state, we need one more fact.
The strong form of the second law is obtained by adding the
further assertion that the entropy not only \ldquo{}tends\rdquo{}
to increase; in fact it /will/ increase, /to the maximum value
permitted by the constraints imposed/. In the case of the carbon
dioxide, these constraints are: fixed total energy (first law),
fixed total amount of carbon dioxide, and fixed position of the
piston. The final equilibrium state is the one which has the
maximum entropy compatible with these constraints, and it can be
predicted quantitatively from the strong form of the second law
if we know, from experiment or theory, the thermodynamic
properties of carbon dioxide (i.e., heat capacity, equation of
state, heat of vaporization).
To illustrate this, we set up the problem in a crude
approximation which supposes that (1) in the range of conditions
of interest, the molar heat capacity \(C_v\) of the vapor and
\(C_\ell\) of the liquid, and the molar heat of vaporization
\(L\), are all constants, and the heat capacities of cylinder and
piston are negligible; (2) the liquid volume is always a small
fraction of the total \(V\), so that changes in vapor volume may
be neglected; (3) the vapor obeys the ideal gas equation of state
\(PV = nRT\). The internal energy functions of liquid and vapor
then have the form

\begin{equation}
U_\ell = n_\ell\,[C_\ell T + A] \tag{1-61}
\end{equation}
\begin{equation}
U_v = n_v\,[C_v T + A + L] \tag{1-62}
\end{equation}
where \(A\) is a constant which plays no role in the problem. The
appearance of \(L\) in (1-62) recognizes that the zero from which
we measure energy of the vapor is higher than that of the liquid
by the energy \(L\) necessary to form the vapor. On evaporation
of \(dn\) moles of liquid, the total energy increment is
\(dU = dU_\ell + dU_v = 0\), or

\begin{equation}
[n_v C_v + n_\ell C_\ell]\,dT + [(C_v - C_\ell)T + L]\,dn = 0 \tag{1-63}
\end{equation}

which is the constraint imposed by the first law. (Note, however,
that the second law has nothing to say about how rapidly this
approach to equilibrium takes place.) As we found previously
(1-59), (1-60), the entropies of vapor and liquid are given by

\begin{equation}
S_v = n_v\,[C_v \ln T + R \ln(V/n_v) + A_v] \tag{1-64}
\end{equation}
\begin{equation}
S_\ell = n_\ell\,[C_\ell \ln T + A_\ell] \tag{1-65}
\end{equation}

where \(A_v\), \(A_\ell\) are the constants of integration
discussed in the last Section.
We leave it as an exercise for the reader to complete the
derivation from this point, and show that the total entropy
\(S = S_\ell + S_v\) is maximized subject to the constraint
(1-63), when the values \(n_{eq}\), \(T_{eq}\) are related by

\begin{equation}
\frac{n_{eq} R T_{eq}}{V} \;=\; B\, T_{eq}^{\,(C_v - C_\ell + R)/R}\, e^{-L/R T_{eq}} \tag{1-66}
\end{equation}

where \(B \equiv R\,\exp[(A_v - A_\ell - C_v + C_\ell - R)/R]\).
Equation (1-66) is recognized as an approximate form of the vapor
pressure formula.
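One route through the exercise (an editor's sketch, derived only
from (1-63) and the entropy expressions of the last Section; not
part of the original text): vary the split \(n_v = n\),
\(n_\ell = n_{tot} - n\) at fixed \(V\). Then

\[
dS \;=\; \Big[(C_v - C_\ell)\ln T + R\ln\tfrac{V}{n_v} + (A_v - A_\ell) - R\Big]\,dn
\;+\; (n_v C_v + n_\ell C_\ell)\,\frac{dT}{T}
\]

and eliminating \(dT\) with (1-63), the condition \(dS = 0\)
becomes

\[
(C_v - C_\ell)\ln T_{eq} + R\ln\frac{V}{n_{eq}} + (A_v - A_\ell) - R - (C_v - C_\ell) - \frac{L}{T_{eq}} \;=\; 0,
\]

which exponentiates, via \(p = n_{eq}RT_{eq}/V\), into a
vapor-pressure formula of the form \(p \propto T^{\alpha}\,e^{-L/RT}\).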
We note that \(A_\ell\), \(A_v\), which appeared first as
integration constants for the entropy with no particular physical
meaning, now play a role in determining the vapor pressure.
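As a quick numerical sanity check of the maximization (an
editor's sketch, not part of the original text; every parameter
value below is invented for illustration), one can maximize
\(S = S_\ell + S_v\) over the liquid/vapor mole split, with \(T\)
fixed at each split by the first-law constraint, and verify that
the stationarity condition \(dS/dn = 0\) holds at the maximizer:

#+begin_src python
# Editor's sketch: maximize S = S_l + S_v over the liquid/vapor mole split,
# with T fixed at each split by the first-law energy constraint (1-63).
# All parameter values below are invented for illustration only.
import math

R = 8.314             # gas constant, J/(mol K)
Cv, Cl = 28.0, 35.0   # molar heat capacities of vapor, liquid (made up)
L = 5000.0            # molar heat of vaporization (made up)
Av, Al = 50.0, 10.0   # entropy integration constants (made up)
A = 0.0               # energy constant; plays no role in the problem
V = 0.5               # vessel volume, m^3
n_tot = 2.0           # total moles of carbon dioxide
U0 = 35000.0          # fixed total energy, J

def T_of(nv):
    """Temperature fixed by U0 = n_l*(Cl*T + A) + n_v*(Cv*T + A + L)."""
    nl = n_tot - nv
    return (U0 - n_tot*A - nv*L) / (nl*Cl + nv*Cv)

def S_of(nv):
    """Total entropy S_l + S_v, using the expressions (1-64), (1-65)."""
    nl = n_tot - nv
    T = T_of(nv)
    return (nv*(Cv*math.log(T) + R*math.log(V/nv) + Av)
            + nl*(Cl*math.log(T) + Al))

# Grid search for the entropy maximum over the mole split.
grid = [i*1e-4 for i in range(1, int(n_tot/1e-4))]
nv_eq = max(grid, key=S_of)
T_eq = T_of(nv_eq)

# Stationarity condition dS/dn = 0 should hold at the maximizer.
resid = ((Cv - Cl)*math.log(T_eq) + R*math.log(V/nv_eq)
         + (Av - Al) - R - (Cv - Cl) - L/T_eq)
print(nv_eq, T_eq, resid)
#+end_src

With these (arbitrary) constants the maximum lies in the interior
of the allowed range, and the residual of the stationarity
condition is small, as the strong form of the second law demands.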
** The Second Law: Discussion

We have emphasized the distinction between the weak and strong
forms of the second law because (with the exception of
Boltzmann's original unsuccessful argument based on the
H-theorem), most attempts to deduce the second law from
statistical mechanics have considered only the weak form; whereas
it is evidently the strong form that leads to definite
quantitative predictions, and is therefore needed
* COMMENT Appendix

| Generalized Force  | Generalized Displacement |
|--------------------+--------------------------|
| force              | displacement             |
| pressure           | volume                   |
| electric potential | charge                   |