1988
JOHN MADDEN FOOTBALL VIDEO GAME
John Madden prided himself on being an old-school kind of guy. As a television announcer, he was the oracle of the offensive line.1 He liked football in the mud and in the cold. His favorite word was “Boom!”
But he was old school, not old-fashioned. Madden was always more than willing to try new techniques on the field, where he won 75 percent of his games—and a Super Bowl—as head coach of the Oakland Raiders. During his 29-year television career, he was one of the first to do live diagrams of plays.
And that helps to explain why, in 1988, Madden became one of the biggest names in video games. In 1984 the head of Electronic Arts (EA), Trip Hawkins, asked Madden to lend his name to a game the company was planning. But the old-school part of Madden hated what EA showed him. Because of technological limitations, the creators could only have seven on a side. “If it’s not 11-on-11,” Madden stated, “it’s not real football.”2 What’s more, the demos “didn’t have a lot of line play.”3 He would have no part of it. Several years later, computer capabilities had improved enough to meet Madden’s standards. In the meantime, the game became known inside EA as an albatross: “Trip’s Folly.”
But this albatross would fly. Released for the Apple II in 1988, the first version of John Madden Football looks laughable now, with clunky cartoon figures bouncing across a dizzying green-and-white field.4 But it was still the most realistic and sophisticated football game on the market.5 Its plays were derived from NFL playbooks, each player carried individual ratings, including how he matched up against other players, and the results of one play affected what happened on the next. It sold well enough that EA released a newer, better John Madden Football in 1989. This one was designed to work with a new console Sega was producing, the Genesis, which had two joysticks and faster processing. That game—informed by Madden’s knowledge and attention to detail—and the console gave birth to the modern sports video game market.
Madden continues to be involved. Developers pick his brain about how to incorporate new kinds of action, and he critiques new plays.6 The only other constant has been change, which is why Madden NFL (as it has been known since 1993) has stayed in the game.7 Today’s versions are immeasurably more sophisticated. The players look like players, not Pac-Men in helmets, and users can craft their own rosters and make their own plays. Data are updated constantly, and everything is interactive.
As a result, Madden NFL has become imbedded not only in gaming culture, but also in football culture. The graphics and sound became increasingly sophisticated, and TV began to take tips from the video game, learning how to inject more color and flash.8 Coaches, too, have taken note. The kinds of things that can be done on Madden NFL 16 add up to a virtual-reality testing ground. The game is a staple of NFL training camps, and there is even a “Madden NFL curse”: several players who were featured on the game cover, such as Daunte Culpepper, Peyton Hillis, and Michael Vick, later hit rough patches. Coincidence? Maybe.
1989
PETE ROSE AUTOGRAPHED BASEBALL
Tagged with the derisive nickname “Charlie Hustle” as a rookie, Pete Rose embraced the idea instead, running to first on a walk and flying around the bases with abandon. A career .303 hitter and a key cog in the great Big Red Machine of the 1970s, he made 17 all-star teams, won three batting titles, earned two Gold Gloves, and was voted most valuable player in 1973. His 44-game hitting streak in 1978 was the second-longest in the modern era. He played more games than anyone in history, hit more singles, and made more outs.1 But to describe Pete Rose by the numbers is like painting by numbers; you get a broad view of an image, but it’s crude.
Rose will always be known for three things. First, he loved baseball, and fans loved the way he played it, seeing in his chunky body and not entirely grammatical speech the gritty overachiever he was. His uniform was always dirty, and he would do anything to win. When he left his hometown Cincinnati Reds for the Philadelphia Phillies in 1979, he took his winning spirit with him. The Phillies won their first World Series in 1980, and Hall of Fame third baseman Mike Schmidt would say, “Rose made the difference.”2
Second, on September 11, 1985, as Cincinnati’s player-manager, he lined a clean single to left center for his 4,192nd hit, surpassing Ty Cobb for the major league record. He retired with 4,256 of them.
Third, there was the trauma and ugliness of his eviction from the game. As a ballplayer, Rose had a precise sense of what he could and could not do. Off the field, his balance was not as refined. A serious student of the life of Ty Cobb (and the father of a son named Ty), Rose was well aware that the First Commandment of baseball is “Thou shalt not bet.” Moreover, every clubhouse has a warning to that effect. But he did it anyway, associating with a crew of uncharming lowlifes to liaise with gamblers.
When Commissioner Bart Giamatti hired an investigator, John Dowd, to look into allegations against Rose, Dowd found significant evidence that Rose, denials to the contrary, had indeed bet on the Reds while he was playing and managing. Rose and Giamatti made a deal that major league baseball would not formally declare that he had bet on baseball, but Rose would agree that baseball had “a factual basis to the penalty imposed on him.”3 Rose would be placed on the ineligible list—the punishment set out in Rule 21 (d)4—and could apply for reinstatement after at least a year. Eight days later, Giamatti died.
For more than 14 years Rose continued to deny that he had bet on baseball. In 2004 he confessed, sort of. In My Prison Without Bars, one of the worst sports memoirs ever written—and that is a rich vein to mine—he admitted that he had bet, but never on the Reds to lose. “I’m sure that I’m supposed to act all sorry or sad or guilty now that I’ve accepted that I’ve done something wrong,” he wrote. “But you see, I’m just not built that way.”5 The gracelessness of the apology, if that is what it was, did not impress. Rose continues to be barred from the game and from the Hall of Fame.
The Hit King, as Rose styles himself, has long made his living by signing his autograph at card shows. He charges a premium to write one sentence—“I’m sorry I bet on baseball.”
1992
CAMDEN YARDS
Camden Yards revolutionized the modern architecture of baseball. In fact, “counterrevolution” may be the better term, because it represented a return to older design principles: build downtown, for baseball only, with seats close to the field, and in a way that reflects and honors the urban landscape.
Those were the hallmarks of America’s pioneer modern ballparks, Shibe Park in Philadelphia and Forbes Field in Pittsburgh, which both opened in 1909. They were constructed of steel and concrete, a big improvement over the earlier jury-rigged wooden firetraps held together by spit and chewing gum. Shibe Park was a beaux arts masterpiece where owner/manager Connie Mack had his office in a cupola. Forbes was tucked into the city’s Oakland neighborhood with understated style.
Pittsburgh, to its eternal credit, has preserved Forbes Field’s home plate, imbedded in the floor of Posvar Hall at the University of Pittsburgh, and a large portion of the outfield wall, complete with ivy and the distance markings—an incredible 457 feet to left-center. Pilgrims can thus pay homage to one of baseball’s greatest moments: Bill Mazeroski’s home run to win game seven of the 1960 World Series. Shibe Park, abandoned in 1970 and razed six years later, has vanished altogether, save for a few bricks. Phillies Hall of Fame outfielder and broadcaster Richie Ashburn almost melted when he thought of the place: “It looked like a ballpark. It smelled like a ballpark. It had a feeling and a heartbeat, a personality that was all baseball.”1
Both cities replaced these urban icons with ghastly, multipurpose stadiums featuring artificial turf and cutouts around the bases. Described perfectly by baseball writer Roger Angell as “grassless and graceless,”2 these cookie-cutter fields of screams wore out their welcome quickly, but not before other cities followed suit, including great baseball towns that really should have known better, such as St. Louis and Cincinnati. Houston, Minnesota, Montreal, Seattle, and Toronto played in domes, which were even worse. Even the stadiums that didn’t follow the blueprint just weren’t very good. San Francisco’s Candlestick Park was notorious for swirling winds that made it almost unplayable at times. Shea Stadium could rock, but it aged quickly, and not well.
Ironically, Baltimore’s predecessor to Camden Yards, Memorial Stadium, was not bad at all: excellent atmosphere, a good field, pretty good sightlines. It was homey. But the economics of baseball were changing, and Memorial could not keep up; the Orioles had had their eye on a downtown field for years.
After flirting with some seriously banal ideas, and prodded by Baltimore Orioles officials, the designers hearkened back to the golden age of baseball architecture, 1909 to 1923,3 going so far as to measure the angles and distances of the old parks to discern their golden mean. After distilling the best of the old, they added modern improvements. Some of the old parks had vast foul territory, for example, which meant a lot of cheap outs and left fans far from the action. Camden Yards kept them close. Plumbing, amenities, training facilities, and food were much better. There are no view-blocking columns.
The field is walkable from downtown and just blocks from Babe Ruth’s birthplace. Somewhere in center field his father ran a saloon;4 the Ruth family privy was discovered during excavations.5 The architects honored the city’s industrial roots by incorporating the former B & O warehouse, the longest building on the East Coast, into the design. That decision was controversial at the time;6 in retrospect, it was the genius of the obvious. Camden Yards would be much, much less without the warehouse dominating the right field skyline.
It’s true that the expensive luxury boxes may draw the wrong kind of customer. And there were some other imperfections, such as seats that didn’t orient toward home plate and an out-of-scale video screen.7 On the whole, though, the place was a miracle. Likeable and functional, it was a throwback that didn’t reek of preciousness. It just worked.
And from the moment it opened in 1992, everyone in baseball recognized that.8 With too many cities proving amenable to throwing tax dollars at baseball billionaires, a building boom ensued. Since 1992, 20 major league ballparks have been built; 18 of them reflect the Camden Yards imprint.9 Brick is big. Quirkiness is carefully designed. Steel trusses are de rigueur.
Camden Yards was so influential it arguably made retro the new cookie-cutter. (To visit the homes of the Washington Nationals and the New York Mets is to feel déjà vu all over again.) Even so, the fact remains that these parks are much better than what they replaced and are all good places to watch, and play, the game.
Camden Yards itself has seen more mediocrity than greatness on the field, but it has hosted two truly historic moments. The first occurred on September 6, 1995, when Cal Ripken played his 2,131st consecutive game, breaking Lou Gehrig’s record. Naturally the hometown hero hit a home run—and a year later to the day, Eddie Murray, one of Ripken’s closest friends, hit his 500th home run on the same field.
The second happened on April 29, 2015, when the Orioles and Chicago White Sox played a game with an official attendance of zero. In the wake of a death in police custody, unrest and disturbances had been flaring up around Baltimore for days. Camden Yards was not immune. A few days before, fans had been locked in after a game due to what the scoreboard referred to as “an ongoing public safety issue outside the park.”10 Because city leaders were skittish about a big public gathering at such a fraught time, the teams agreed to play in an empty stadium. There can be no better, or worse, demonstration that the Elysian fields of sports can never be separated from what happens outside the lines.
1993
JACKIE JOYNER-KERSEE’S SHOE
“Someday, this little girl is going to be the First Lady of something,” the matriarch said when her granddaughter was born in 1962.1 Such predictions are common from grandparents. This one, however, came true, because that infant, Jacqueline Joyner-Kersee, became the first lady of track-and-field.
And perhaps more than that. Although it’s tricky—indeed, impossible—to compare athletes across generations, either Babe Didrikson Zaharias or Jackie Joyner-Kersee is surely America’s greatest female athlete. The former was accomplished in more sports; the latter excelled in two sports in an era when the competition was much tougher. Call it a toss-up.
Joyner-Kersee was born in East St. Louis when it was a bustling blue-collar town. But the factories closed, and the middle class fled. When she was growing up, it was, and remains, a very tough place. But she had strict parents—a 10:00 p.m. curfew, no makeup, and no dating until she was 16. “The complete list of [my mother’s] rules would fill this book,”2 Joyner-Kersee recalled ruefully in her autobiography. In addition, a nearby community center3 provided activities, including dance, art, a library, and sports for the active youngster. She also found a good coach, Nino Fennoy, at a young age. All this provided a foundation for excellence. In high school she was a top student as well as captain of the basketball and volleyball teams. She earned a hoops scholarship to the University of California Los Angeles, where she insisted that she also be allowed to compete in track.
Joyner-Kersee was a gifted basketball player, starting for four years; like Didrikson, she was named an all-American. But she had more room to grow on the track. Entering UCLA as a long jumper, at the suggestion of assistant coach (and future husband) Bob Kersee, she took up the heptathlon. She would never give up the long jump; she adored the purity of it. A one-time world record holder, she won an Olympic gold medal in the event in 1988, plus bronze in 1992 and 1996. Her personal best of 24 feet, 6¾ inches is still the American record.
But it was in the heptathlon that Joyner-Kersee found her true calling. The event has seven elements—the 100-meter hurdles, high jump, shot put, 200 meters, long jump, javelin, and 800 meters—and the variety allowed her to display her wide-ranging athleticism. About 5 foot 10 and 150 pounds, she had the build for the strength events, while still being lithe enough for the sprints. At the 1984 Olympics she finished a close second; if she had run the last event, the 800 meters, just a third of a second faster, she would have won.4 She would not lose again for almost a dozen years.
Over that period she won gold medals in Seoul in 1988 and Barcelona in 1992 and four world championships.5 She was the first woman to score more than 7,000 points; her world record of 7,291 still stands. Sometimes she won all seven elements. In 1986 Joyner-Kersee won the Sullivan Award as the country’s top amateur and also won the Jesse Owens Award in both 1986 and 1987 as the best track athlete. In 2001 she was named the top collegiate athlete of the previous 25 years.6 Her six Olympic medals tie her with Allyson Felix for the most ever by an American woman in track-and-field. In recognition of all this, in 2013 the award for best female track athlete was renamed after her.7
Jackie Joyner-Kersee was part of the first generation of women athletes to benefit fully from the forces changing sports.8 Thanks to Title IX, there were athletic scholarships available that allowed her to build her skills. And beginning in 1981 the rules governing amateurism softened; this meant that after graduating from UCLA, she could support herself through sponsorships and endorsements and thus extend her career. Her last major event was in 2000; the shoe pictured here dates from 1993.
Light and composed of man-made fibers, it invites comparison with the leather shoe that Mae Faggs wore (see 1952 entry on the Tigerbelles) and even with the first modern American track shoe (see 1974 entry on the Nike Waffle Trainer).
Since leaving competition, among other things, Joyner-Kersee has founded a community center in East St. Louis9 that offers many of the same kinds of programs from which she benefited so much. And her first coach, Nino Fennoy, has led East St. Louis high school girls to 16 state championships.10 Perhaps somewhere among them is the next first lady of US track.
1995/2009
GENO AURIEMMA’S FIRST CHAMPIONSHIP TROPHY AND PAT SUMMITT’S 1,000TH VICTORY BALL
For 12 years, it was a wonderfully nasty rivalry. When the women of the Universities of Connecticut and Tennessee played basketball against each other, the stakes were always high, the games often close.
A few months after their first nationally televised clash in January 1995, the UConn Huskies brushed the Lady Vols aside, 70–64, to complete an undefeated season and win the team’s first national championship (see trophy on page 253). The following year Tennessee got a most satisfying revenge, beating UConn in overtime in the Final Four, in one of the greatest games in hoops history. Four times, the teams met in the National Collegiate Athletic Association final; four times, UConn won.
Then there were the coaches, two of the legends of the sport. Geno Auriemma of UConn and Pat Summitt of Tennessee began as relatively friendly rivals but became relatively unfriendly ones, to the extent that in 2007, Summitt canceled the regular-season series between the teams. She suspected UConn was committing recruiting violations;1 she was also upset that Auriemma had called her Lady Vols the “evil empire.” Since then, the teams have only met in the NCAA tournament.
Of course the two coaches had a great deal in common. “We both lived the game,” Summitt would write, “as if it was in us on a cellular level.”2 Both were brilliant, intense, dedicated coaches who did not tolerate fools, bad officiating, stupid questions, or sloppy play. They both had exemplary records when it came to the academic accomplishments of their players. Both were natural alphas, not inclined to cede, or share, status as leaders of the sport.
Born on a family farm in rural Tennessee, Pat Head Summitt was raised on a diet of hard work and rectitude. Her career covered the entire modern history of women’s hoops. She grew up with the six-player, half-court game in high school and played in the first national college tournament as well as on the first US Olympic team, which won the silver medal in 1976.
Hired to coach Tennessee in 1974, she was paid $250, and also did the laundry and drove the team van. To pay for new uniforms, the team sold doughnuts.3 But Summitt began to build a reputation; in 1984, she coached the US Olympic team to a gold medal. By then she had perfected The Stare, a piercing look that was intimidating even to see on television.
In her 38-year tenure as head coach (1974–2012), Summitt never had a losing season. Her teams made every NCAA tournament, winning the national championship six times between 1987 and 1998—including the last three in a row. She began to be referred to as the “John Wooden of women’s basketball”; Wooden won 10 titles with UCLA from 1964 to 1975. In 2009 Summitt became the first Division I hoops coach to win 1,000 games, an achievement commemorated with this ball. By the time she retired in 2012, she had won more games (1,098) than any other basketball coach, male or female; gone to 18 Final Fours; and won eight titles. From 1985 to February 2016, the Lady Vols were never out of the top 25—a record 565 weeks.4 In 2000 she was named the Naismith Women’s Collegiate Coach of the Century; John Wooden won the award for men.
Auriemma comes from a very different background. He emigrated from Italy as a boy and grew up outside Philadelphia. His first college coaching job was as an assistant for the women’s team at St. Joseph’s in Philadelphia; his sense of humor is spiked with distinctly Philly barbs. He also worked at the summer hoops camps of Immaculata coach Cathy Rush (see 1972 Immaculata Mighty Macs entry);5 Rush helped him get the post as an assistant at the elite University of Virginia women’s program. After four years he was ready to run his own show and became head coach at UConn in 1985. Those Huskies were a hangdog bunch, with only one winning season in 11 years. By 1991 they were in the Final Four, and in 1995 they won it all. Huskies teams have gone undefeated five more times under Auriemma, at one point winning 90 games in a row, breaking Wooden’s record of 88.6 He also coached the American women to gold in the 2012 Olympics. In 2016, the Huskies won their fourth title in a row, and eleventh under Auriemma, breaking Wooden’s record, and appear primed to keep going.
UConn–Tennessee is the only women’s team rivalry that has approached anything like the intensity and interest characteristic of those in the men’s game, such as Duke–North Carolina. Summitt herself referred to it as an “absolute masterpiece.”7 Like any great rivalry, UConn and Tennessee pushed each other, and the game itself, to greater excellence. From 1995 to 2010, one or the other won 12 of 16 NCAA championships, and all but two Final Fours over that period featured at least one of them.
The contest has lost some of its edge because UConn has been so dominant of late; Tennessee hasn’t won since 2008. The renewed friendship between Auriemma and Summitt has also taken the venom out of it. But the most important reason it will never be the same is that Summitt will not be part of it. In 2011, at age 59, she was diagnosed with early onset Alzheimer’s. She retired after that season—a personal tragedy and a terrible loss to the game she loved. When she died in June 2016, all of basketball—and all of Tennessee—mourned. Pat Summitt was, literally, a game-changer.
1998
PIECE OF FLOOR FROM MICHAEL JORDAN’S “LAST SHOT” WITH THE BULLS
As a freshman at North Carolina, Michael Jordan sank the game-winning shot in the 1982 National Collegiate Athletic Association finals. As an Olympian, he won gold medals in 1984 and 1992. As a professional, he won six titles with the Chicago Bulls. As an individual, he won the Rookie of the Year award, five most valuable player awards, and ten scoring titles. And as a commercial icon, he’s unmatched.
Even so, failure is a crucial element of the Jordan story—something confirmed by the man himself. “I’ve missed more than 9,000 shots in my career,” he said in a 1998 commercial. “I’ve lost almost 300 games. Twenty-six times I’ve been trusted to take the game-winning shot and missed. I’ve failed over and over and over again in my life.”1
Jordan hated failure, but he never feared it, which is why he always wanted the final shot and put himself in position to take it. And he channeled the sting of failure into greatness. In his induction speech into the Basketball Hall of Fame, he reminded the coach who didn’t name him to the high school varsity, “You made a mistake, dude.”2 In the National Basketball Association, his rivals noticed how he improved his jump shot, ball-handling, and defense.3
Jordan’s commercial success, too, was built on a willingness to fail. In his first contract with Nike, he insisted on having his own brand. If it was a dud, it would be a personal and highly public dud. The first Air Jordan rang up $130 million in sales in 1985.4 That put Nike, which had been struggling, back on track. Now in its thirtieth edition,5 the brand continues to dominate the sneaker market.
The best advertising, however, was on the court. Jordan was wearing the first Air Jordans when he scored 63 points in the playoffs against one of the NBA’s greatest teams, the 1986 Celtics. The Bulls lost in double-overtime, but Larry Bird was so awed he called his rival “God disguised as Michael Jordan.”6 After six years when he played brilliantly but his team always failed to win the big one, the Bulls broke through in 1991 for the first of three consecutive titles.
The one failure Jordan did not convert into success was on the baseball field. He retired from the NBA in October 1993—a year marked by the murder of his father, James—to try to make it to the majors. With the AA Birmingham Barons, though, he hit only .202 in 127 games7 and committed 11 errors in the outfield. He returned to the Bulls in March 1995. In his fifth game back, he scored 55 points. With Jordan fully committed again, Chicago won three straight championships, in 1996, 1997, and 1998, matching the “three-peat” that had preceded his baseball interlude.
His greatest moment with the Bulls was his last. Chicago was up three games to two in the 1998 finals, but was playing in front of a tough Utah Jazz crowd. Due to injuries and illness, the Bulls were not at full strength, and Jordan had carried the team. Late in the fourth quarter, though, his exhaustion showed; at one point, he missed four straight jumpers.8
With 41.9 seconds left, the Jazz were up 86–83, after a three-pointer from the estimable John Stockton. In less than five seconds, Jordan drove to the basket and scored, to close the gap to one. Stockton took the ball back up court; with about 16 seconds left, he passed inside to Karl Malone. From behind, Jordan swatted the ball out of Malone’s hands, recovered it, and dribbled up court with complete assurance. At 10 seconds, Jordan made his move, driving right, nudging Utah’s Bryon Russell left, then stepping back via a lovely crossover dribble to create space. From the top of the key, he paused, set, and with perfect form, hit the game-winner. (A fragment of that floor is seen on the previous page.) The Bulls had their sixth title in eight years.
Seven months later, in January 1999, Jordan announced his retirement. It felt right. His last shot was the perfect ending to an epic career, and most of the key members of the team, including coach Phil Jackson, would not be back for the 1998–1999 season. But Jordan couldn’t stay away. He returned to play for two frustrating years with the Washington Wizards in 2001–2003. Then he retired for good.
Jordan did much more than rack up trophies. He changed the NBA. He joined the league at precisely the right time. The advent of Larry Bird and Magic Johnson had lifted the league from the depths of near-irrelevance (see 1979 entry on them). But the combination of Jordan, new leadership, and new technology launched the NBA to new heights.
The new leader was David Stern, who became commissioner in 1984. He had a good reputation with the players and helped to negotiate an antidrug policy and a revenue-sharing structure in 1983; these agreements stabilized the league’s finances and improved its reputation.9 As commissioner, Stern was brilliant in leveraging new technology, meaning new kinds of television. Cable TV, particularly the emergence of ESPN and regional networks, ensured more complete American coverage and added a generous new source of revenues. Satellite technology enabled the game to find new fans overseas. In 1986 people in about three dozen countries could see NBA games; a decade later, that number had increased to 175.10 For much of this global audience, Jordan was the NBA. Stern would always remember being in China in 1990 when a local guide earnestly told him how much she liked the “red oxen.”11
“Michael changed the world,” concluded Bill Walton, the former UCLA and NBA star. Like many things said about Jordan, that seems over the top. But he was the greatest player ever, and is still the most famous. It’s quite a legacy, and the ultimate affirmation of his belief that his willingness to try, and fail, “is why I succeed.”12
1999
BALL FROM THE 1999 WOMEN’S WORLD CUP
In a familiar scene, there were 90,000 roaring fans in the Rose Bowl, and across the country, 40 million people scarfed down nachos and wings as they cheered their team in the big game. But there was one big difference: the game that provoked all this—the final of the 1999 World Cup—was being played by women.
The US women’s team was a soccer power. Led by a veteran core, it had won the first World Cup in 1991 and finished third in 1995. Posters of Michelle Akers, Julie Foudy, Mia Hamm, Kristine Lilly, Carla Overbeck, and Brianna Scurry were on the bedroom walls of girls all over the country.
Then they won the Olympics in 1996, beating reigning World Cup champion Norway in overtime to reach the final. But television barely covered the women’s games, only managing to air limited tape-delayed highlights. In the stirring final against a resolute Chinese side, the United States won 2–1 before more than 76,000 enthralled fans,1 and the players emerged as genuine stars, even though television managed only to air a few minutes of that, too.2
Olympic gold was the perfect preparation for the 1999 World Cup. When the United States had won the right to host the event, the leaders of FIFA, the international soccer federation, thought to make it a cozy little affair, clustered in second-tier venues on the East Coast.3 Maybe the games could get a little coverage at the far end of the cable spectrum. This was largely the same FIFA that had insisted that the women play 40-minute halves during the 1991 World Cup; 45 minutes, they decreed, would be too much for the gals.4
The American players, however, were not interested in hosting a mini-me World Cup, and they forced people to listen. The members of the national team were an appealing bunch, lacking the criminal records and sense of entitlement that filled too many sports pages. It didn’t hurt one bit that they were attractive—“babe city” in David Letterman’s words. Many had been playing together for years, often under difficult conditions, and there was a palpable cohesiveness and team-first spirit. At the same time, they were brimming with competitive fire. The team’s sports psychologist remembered that blood spilled, literally, during a game that required them to grab spoons or be eliminated.5 Put it all together, and the result was a group portrait of pleasing hues. Not unlike the pioneers of the women’s tennis tour, the soccer players worked hard to promote their sport, giving clinics and signing autographs. That helped to build grassroots enthusiasm for the Cup.
As a result of the charm offensive, it gradually began to occur to the powers that be that while this soccer team was discernibly female, which was too bad, it might be worth a little more attention. After all, the US women actually won things, something that could not be said of their male counterparts. And with 7.5 million American girls registered to play,6 maybe those 10,000-seat venues were a little unambitious. So soccer went for it, booking the Boston, Chicago, New York, and Washington pro football stadiums, plus the Stanford stadium and smaller venues in Portland and San Jose. The goal was to sell 312,000 tickets, a guesstimate that was wildly wrong. Half a million tickets were gone before the first kick.7
The tournament became a phenomenon, and this wasn’t just an American show. Early-round matches like Mexico–Italy attracted more than 50,000 spectators; even North Korea–Denmark sold out. Best of all, the US women played with the verve and charisma they had been displaying for years, beating a strong German squad 3–2 in the quarterfinals. Against Brazil in the semis, goalkeeper Scurry had her best game of the tournament, keeping an aggressive offensive team out of the goal in a 2–0 victory.
By the time of the final against China on July 10, World Cup fever had spread; even the most macho sports bars tuned in to the final. There have been better games. The action clustered around midfield, as both teams played tightly, afraid to make a blunder. China’s goalkeeper had to make only four saves in regulation; Scurry made two. But there was also intensity, some superb passing—and one heart-stopping moment in extra time, when China connected on a header that Kristine Lilly just barely managed to head off the line. After 120 minutes, the score was 0–0.
So it would come down to penalty kicks. The first four kicks, two from each team, were good. On the fifth kick, though, Scurry anticipated Liu Ying’s intention and made an excellent save low and to her left. If the final three Americans could convert, the World Cup would be theirs. The Chinese did not make it easy, with their last two players both finding the back of the net. It came down to Brandi Chastain. She lined up slightly to the right of the ball, took four short steps, stroked through with her left, and lined a shot high into the right corner. Unstoppable. Goal, match, and Cup to the Americans. Plus, the unforgettable image of Chastain ripping off her team jersey, revealing a black sports bra and stellar abs.
Signed by most of the US team, the slightly scuffed ball pictured here was used in that memorable World Cup. The images are of the places where games were held: the Statue of Liberty for New York; the Capitol for Washington, DC; a circuit map for San Jose, the heart of Silicon Valley. It is an all-American artifact for an all-American triumph, both on and off the field.
While the US women’s team has continued to excel in international competition, winning Olympic gold medals in 2004, 2008, and 2012, and another World Cup title in 2015, that performance has not translated into professional success. A women’s pro soccer league kicked off in 2001 but closed after three seasons and $100 million in losses.8 Another began in 2009 and folded in 2012; a third is hanging in there.9 Whether the problem is that the athletes are female, or that the game is soccer, or that the sports calendar is crowded, or that this is just one of those things Americans only care about every few years, women’s pro soccer hasn’t found its footing. But the girls of summer, circa 1999, gave American fans the most remarkable three weeks in the history of women’s team sports. So far.
2000
TIGER WOODS’S SCORECARDS FROM THE US OPEN
In his finest performance in his best year, Tiger Woods won the US Open at Pebble Beach by 15 shots—the largest margin of victory in a major.1 Shooting 65–69–71–67, as these scorecards show, he led from start to finish and tied the Open scoring record at 272.2 When Woods won the British Open the following month—setting a course record at 19 under par—he became, at age 24, the youngest man to win all four majors. And when he followed that with the PGA and then the 2001 Masters, he owned all four modern major trophies at the same time. Over the four tournaments, he showed not only skill but true grit. At the PGA, he had to sink a six-footer on the last hole to force a playoff. At the Masters, the back nine was a nail-biter until he pulled ahead for good on the sixteenth.
It was not a Grand Slam, because Woods did not win all four within a calendar year. But the Tiger Slam is still the most remarkable achievement in modern golf. Only Ben Hogan, in 1953, won as many as three majors in a row. Woods’s record in 2000 might be the most dominant ever. He entered 20 events, won 9, and finished in the top three 14 times.
Remarkable, but not surprising. Tiger Woods had been on golf’s radar for almost his entire life. As a toddler he showed off his form on the Mike Douglas Show, where he also managed to upstage Bob Hope. Woods won three Junior Amateurs and three US Amateurs. At age 16 he became the youngest man to compete in a tour event. Like the man he has spent his life chasing, Jack Nicklaus, Woods won his first major, the 1997 Masters, while still the reigning US Amateur champion.
That triumph had particular meaning, because the Masters tournament had been all-white until the year of Woods’s birth, 1975, when 40-year-old Lee Elder became the first black man to play in the tournament. Charlie Sifford and Pete Brown, despite two tour victories each, were never invited. Bobby Jones (see 1930 entry), the creator of the Masters and regarded as one of the great gentlemen of the game, was, for better and, in this case, for worse, a man of his place and time; he never addressed this injustice before his death in 1971. The club itself didn’t have a black member until 1990.
The Masters was hardly unique in its attitude. From 1934 to 1961 the bylaws of the Professional Golfers Association of America restricted membership to “professional golfers of the Caucasian race.” Teddy Rhodes had to sue to earn his way into the US Open in 1948.
Woods had teed up at Augusta as an amateur in 1995 and 1996, when he played a practice round with Jack Nicklaus and Arnold Palmer. But 1997 was his first major as a professional. Initially he looked rattled, shooting a 40 on the front nine. Then he settled down and played some of the most dominating golf ever seen, shooting a 30 on the back nine and then 66–65–69 to finish at 18 under (a course record), 12 shots ahead (another record) at age 21 years, 3 months (ditto). A student of the history of the game, Woods acknowledged those who came before him. “I wasn’t the pioneer,” he said. “Charlie Sifford, Lee Elder, Ted Rhodes: those are the guys who paved the way.”3 Nicklaus marveled at what he saw: “He’s taking the course apart.”4
From 1997 through 2010, Woods dominated the game, ending every year ranked first (11 times) or second (3 times).5 He also made a record 142 consecutive cuts.6 In 2008 he won his fourteenth and bravest major, taking the US Open on the nineteenth playoff hole while in pain from a balky knee.
At that point, Woods was 32 and looked likely to surpass Nicklaus’s record of 18 major triumphs. Although he was Player of the Year in both 2009 and 2013, he hasn’t won a major since that Open. Personal troubles and injuries have taken a toll, and the rise of young talent like Jason Day, Rory McIlroy, and Jordan Spieth has raised the degree of difficulty. Once the most intimidating figure in the sport, Woods ended the 2015 season ranked 416th.
But however Woods’s career plays out, he has created an enduring legacy. Handsome, personable, multiethnic, and supremely skilled, he brought a new audience to golf, and with it, more television coverage and a lot more money.
He also changed the game itself. In the wake of his 1997 victory, the mandarins of the Masters were not pleased at how ordinary Woods made their precious course look. They immediately set about “Tiger-proofing” Augusta, lengthening numerous holes and adding rough, bunkers, and trees all over;7 the course is 500 yards longer now than it was in 1997.8 Other courses have done the same.
This points to an issue that golf has not yet come to grips with. Balls fly farther. Clubs have improved. Players are fitter and stronger. In 1996 the longest driver, John Daly, averaged 288.8 yards9 off the tee, a figure that would no longer rank in the top 100. Some of the world’s greatest courses, designed in a different era, could be overwhelmed by the modern power game. And yet who wants a world in which St. Andrews and Merion could become obsolete?10
2000
LANCE ARMSTRONG’S BIKE FROM THE TOUR DE FRANCE
“Everybody wants to know what I’m on. What am I on? I’m on my bike, busting my ass six hours a day.”1
“I didn’t take performance-enhancing drugs, I didn’t ask anyone else to take them and I didn’t condone or encourage anyone else to take them.”2
“Anyone who thought I would go through four cycles of chemo just to risk my life by taking EPO was crazy.”3
“How many times do I have to say it? It can’t be any clearer than ‘I’ve never taken drugs.’ How clear is that?”4
All lies, of course. According to the US Anti-Doping Agency’s report released in October 2012, Lance Armstrong led “a massive team doping scheme, more extensive than any previously revealed in professional sports history.”5 He was banned from competition for life and stripped of his seven Tour de France titles. Once a secular saint who hung out with U2’s Bono, dated Sheryl Crow, and chatted with presidents, he was named the most disliked athlete in America in 2013.6
Armstrong’s fall was extreme because he plummeted from such a height. After he claimed the first of his seven straight Tour titles in 1999, less than three years after being diagnosed with testicular cancer, the New York Times lauded his “resoundingly positive image” for an event that had been so tainted by doping arrests the year before that it became known as the “Tour de Dopage.” When Sports Illustrated named him Sportsman of the Year in 2002, the magazine noted that much of the public saw Armstrong as “more than an athlete. He’s become a kind of hope machine.”7 Two years later, a poll named him “the best sports role model of the last 50 years.”8
In 2000 he used this bike, a Trek 5500, on some of the flatter stages of the Tour, including the final run into Paris; made mostly of carbon fiber, its frame weighs less than four pounds.9 Armstrong eventually finished more than six minutes ahead of Jan Ullrich of Germany. His second victory in a row, it secured his status as a truly great cyclist and seemed to confirm that doping was not essential to winning.
That was how it seemed at the time. In fact, Armstrong had been doping since the mid-1990s.10 Even after the dam of lies broke, he could not bring himself to face the consequences of his actions. His defense, as told to Oprah Winfrey in January 2013: “All 200 guys that started the race broke the rules.”
He had a point. Cycling in this era was drenched in drugs; every Tour winner but one from 1996 to 2010 has been implicated.11 Armstrong also benefited from the see-no-evil mentality of the authorities, a blindness encouraged not only by the money Armstrong brought to the sport, but perhaps also by his donations to the official anti-doping program. As late as 2011, Hein Verbruggen, the president of the international cycling federation, was adamant: “Lance Armstrong has never used doping. Never, never, never.”12
What made Armstrong so toxic was not just the doping and the lying, but the malevolence with which he defended himself. Given that he had doped for years, why did he go out of his way to ruin those who had said he did? David Walsh, a Sunday Times (of London) reporter, was vilified as a “little troll”13 and sued for libel. Cyclist Christophe Bassons was bullied out of the sport. Betsy Andreu, wife of teammate Frankie Andreu, was ridiculed as a “crazy bitch” and an “ugly liar.” Emma O’Reilly, a former masseuse for his team, was tangled in lawsuits and referred to as a “whore” with a drinking problem who lied for money.14 And then there’s the vicious campaign against Greg LeMond, now the only American winner of the Tour de France (1986, 1989, 1990). LeMond was forced to apologize15 and saw his business interests in cycling undermined16 when he questioned Armstrong’s relationship with Michele Ferrari, an Italian doctor17 convicted of sporting fraud in 2001.18 Ferrari was so notorious he became known as “Dr. Evil.”19 As Betsy Andreu put it, “The doping is bad, but Lance’s abuse of power is worse.”20
At the same time Armstrong was wrecking these people’s lives, he was also working hard to raise money for his cancer foundation, Livestrong; many cancer patients have called him an inspiration. “He saved my life because he started Livestrong,” said cancer survivor Laurey Matson, “and he helped a lot of other people all over the world.”21 Such a judgment cannot be dismissed. Moreover, Armstrong appears to have been humbled by events. No longer the crass bully he was in his prime,22 he has apologized, publicly and privately, to many of the people he harmed. But none of this can excuse the astonishing breadth and depth of his deceit as a sportsman.
2003
YAO MING BOBBLEHEAD
Yao Ming was literally born to be a basketball player. Chinese Communist Party officials coaxed his parents—both of them tall and accomplished hoopsters—to marry, with the explicit intention of raising an athlete who would make his country proud.1 Improbably, the strategy worked. In 2002 the Houston Rockets made the 22-year-old Shanghai Sharks center their top pick, the first time a number 1 draft choice had come from overseas. “Houston, I am come!” Yao announced.2
He came, saw, and conquered. Texans instantly took to the gentle giant with the soft shooting touch and shy smile. This bobblehead hints at the affection—such promotions are given to successful, likeable players, not surly benchwarmers. (And appropriately, it was made in China.)
In 2003 Yao became the first Chinese player to make the National Basketball Association all-star team,3 and over an eight-year career, he averaged 19 points and 9.2 rebounds a game.4 In a larger sense, however—and everything about the 7-foot-6 Yao has to be seen on a larger scale—he was the human face of the NBA’s commitment to globalize the game.
China and the NBA were a natural fit, even though it took a century to make the match. James Naismith (see 1891 entry) created the game at the YMCA’s international training school; Christian missionaries took the game with them as they fanned out around the world. Beginning in the 1890s, the game sank deep roots in China. “You can just feel what the game means to them,” a Christian missionary wrote to Naismith in the 1930s. “It cannot be described or pictured; it cannot be told; it must be seen.”5 To this day, a visitor can venture to the dustiest backwater, and there will likely be nets. Even during the Cultural Revolution, in which foreign influences were ruthlessly squashed, hoops kept going.6 The game is that imbedded into the culture.7
After the NBA began regular broadcasts to China, Chinese youths took to American hoops in a big way, and Yao became the nation’s most popular athlete. Apple, Gatorade, McDonald’s, Toyota, Visa, and other global companies signed him up. His first all-star game would be available to half the world’s population.8 Outside the Olympics and the World Cup, no sport had ever had that kind of reach. In 2010, a year before Yao retired due to chronic ankle and foot injuries, Sports Illustrated called him the “most influential NBA player since Michael Jordan.”9
The story of Yao, then, is the story of globalization. Or one of them; different sports are taking different routes. The NBA remains popular in China, but few Chinese players have made it to the show—six total and none in 2016. An overambitious (and politically naïve) plan to start a mini-NBA in China fell flat. But on opening day of the 2015–2016 season, there were more than 100 foreign-born players on NBA rosters, from countries as diverse as Argentina, Bosnia and Herzegovina, Cameroon, the Dominican Republic, Israel, Russia, Senegal, Sweden, and Turkey.10 That is 20 times as many players as in 1993. Dozens of Americans have also played in the Chinese Basketball Association.
Other American sports leagues have embraced globalization. Unlike the case of China, though, this has more to do with growing the talent pool than expanding the audience. In 1975, 7 percent of Major League Baseball players were born outside the United States; by opening day in 2015, that figure was 26.5 percent.11 Most are from Latin America, but Australia, Brazil, Japan, the Netherlands, and South Korea are all represented as well. Nearly half of minor leaguers are foreign born.
Hockey, too, has outgrown its North American roots. As recently as the 1970s, more than 90 percent of NHL players were born in Canada; now about half are.12 Players from the Czech Republic, Finland, Russia, Slovakia, and Sweden comprise a quarter of NHL rosters.13
2004
CURT SCHILLING’S BLOODY SOCK
In truth, it wasn’t much of a rivalry. From 1921, when the New York Yankees won their first World Series, through 2003, the Bronx Bombers had won 26 World Series. The Red Sox’s tally over that period was zero; their last championship was in 1918, when Babe Ruth was their star pitcher (see 1932 entry). The Sox sold Ruth before the 1920 season, a gruesome mistake that their fans would later deem the “Curse of the Bambino.” Rarely were the two teams good at the same time, and when they were, as in 1949, 1978, and 1999, New York always won the big game. This was a rivalry in the way that a nail is a rival to a hammer: it may be a very good nail, but its destiny is to get pounded.
So in 2003, when the American League championship series went to a seventh game, only the route to the usual result was different. Boston manager Grady Little left a tired Pedro Martinez in the game. The Yankees tied the score in the eighth and won it in the eleventh. That off-season, the Red Sox added starting pitcher Curt Schilling, a brilliant postseason performer, to the roster, along with more relief depth. The Yankees acquired Alex Rodriguez.
In 2004 the rivalry was for real, as both teams were expected to contend. Scalpers got improbable prices even for spring-training games. But for a three-month stretch, the Sox were mired in mediocrity. After a 13-inning thriller on July 1, when Yankee shortstop Derek Jeter sacrificed his face to catch a foul ball (while his Boston counterpart, Nomar Garciaparra, sat on the bench, seemingly uninterested in the goings-on),1 the Sox were 8.5 games back.2 It looked like wait ’til next year yet again. Instead, the Sox shed former fan favorite Garciaparra at the trading deadline to improve clubhouse chemistry and defense, and the team began to gel. Boston went 42–18 from August on3 and swept the Angels in the division series.
And then, just as abruptly, the momentum stopped. The Sox dropped their first three games to the Yankees in the league championship series, capped by a 19–8 shellacking. Boston mourned. “Soon it will be over,” wrote Bob Ryan of the Boston Globe, “and we will spend another dreary winter lamenting this and lamenting that.”4 One likely lament: the injury to Schilling, who had torn the sheath on a tendon in his right ankle against the Angels. Not that Schilling’s ankle was likely to matter. No team in baseball history had ever come back from 3–0 to win a postseason series; only two had even forced a sixth game.
In the ninth inning of Game 4, the Yankees had a one-run lead with the incomparable Mariano Rivera on the mound. Uncharacteristically, Rivera walked the lead-off batter. Then pinch-runner Dave Roberts stole second base. He scored on a single to tie the game. The Sox had life. More important, they had David Ortiz, who won the game with a twelfth-inning home run. In a thrilling Game 5, Ortiz came through again, this time singling home the winning run in the bottom of the fourteenth.
With the Yankees now ahead three games to two, it was Schilling’s turn to play the hero. But in order to pitch, he needed to be able to push off his damaged right leg. Red Sox physicians thought they could create a kind of sling on which the tendon could rest; nothing like this had been tried before, so they experimented on a cadaver.5 When the procedure appeared to work, they brought the idea to Schilling. He agreed to try. The doctors applied a local anesthetic, incised the pitcher’s right ankle, and created what was essentially an artificial sheath.6
As Schilling warmed up before Game 6, the sutures began to tear, and blood seeped through the white stocking. But the Yankees did little to add to Schilling’s discomfort, getting only four hits off him—and never, curiously, trying to bunt. “I mean, if you’re looking at a wounded duck,” mused Sox infielder Kevin Millar, “why not?”7 Boston won 4–2, to tie the series, and took the decisive seventh game 10–3. The Sox went on to sweep an excellent Cardinals team to win Boston’s first World Series since 1918. The “Curse of the Bambino” was truly and completely dead. The Sox would also win the Series in 2007 and 2013.
Facing financial difficulties8 after his video game company went under,9 Schilling sold the bloody sock at auction in 2013.10 But while the sock itself has disappeared into private hands, the “bloody sock game” will endure forever. This red badge of courage stands for the 2004 Red Sox’s defining qualities—grit and a willingness to take calculated risks—that added up to the greatest comeback in sports history.
2005
FORREST GRIFFIN’S GLOVES FROM THE ULTIMATE FIGHTER 1
The premise of the first Ultimate Fighting Championship (UFC) in 1993 was simple: match a specialist in one martial art—say, karate—against an expert in another, such as sumo. Then rake in the profits by selling the action on pay-per-view television. When one fighter kicked a sumo wrestler, spraying two teeth into the crowd and imbedding two into his feet, a sport was born. Okay, it wasn’t a sport then; it was a spectacle. But it certainly is a sport now.
To clarify: UFC is a trademark. The sport is mixed martial arts (MMA). But UFC has become almost synonymous with its creation, having bought out most of its rivals. For those who don’t like a helping of pain with their competition, MMA can be hard to watch. But it is not all about the blood. MMA fighters are remarkable athletes, needing strength, speed, stamina, flexibility, technique, and courage.
In regular matches, the contestants fight three rounds of five minutes each, though few run that long. More often, a contestant surrenders by tapping the mat. Championships can go five rounds. The barefoot fighters employ a range of techniques, including boxing, karate, jiu-jitsu (particularly the Brazilian variety), Muay Thai, savate, sumo, and wrestling. They compete in a 750-square-foot, octagon-shaped ring, with a six-foot, padded, chain-link fence. Three judges score each fight, and a referee keeps things semiorderly.1
In the early days almost anything went, except eye gouging and below-the-belt shots. There were no weight classes and no time limits. When US senator John McCain referred to MMA as “human cockfighting”2 in 1996, it was difficult to disagree.
And he was not alone. Though MMA found an audience from the start, critics did not have to look far to find evidence to stoke their sense of outrage. Here is how one fighter described his work: “I was hitting him to the brain stem, which is a killing blow. And when he covered up, I swing back, with upswings, to the eye sockets, with two knuckles and a thumb.”3 Dozens of states banned MMA; many cable stations refused to air the fights.
By 2000 the UFC had begun to rein in its blood-and-gore image, implementing weight classes, toning down the to-the-death marketing, and outlawing practices such as “fishhooking”4—plunging a finger into any orifice and pulling. That year New Jersey decided to allow MMA, imposing regulations that remain the basis of current standards.5 There would be no more flying teeth; fighters must wear mouth guards.
But the founders were sick of it all; in 2001 they sold the rights to the name and everything else for $2 million to Zuffa (Italian for “fight”), a company established for the purpose by casino moguls Frank and Lorenzo Fertitta (81 percent), and Dana White, who managed a couple of UFC fighters. Frank Fertitta became the low-profile CEO, White the extremely high-profile president.
The new owners saw two important things. First, MMA was shabby, operating in sad, badly lit venues that no celebrity would set foot in. And second, it had to grow up. “No-holds-barred” cage fighting appealed to a certain subset of young men, but it was never going to impress state regulators. Lorenzo Fertitta could attest to that; he was a member of the Nevada State Athletic Commission.
White set about improving operations, while the Fertittas kept paying the bills as they began to make their case to state athletic commissions. Nevada signed up, and with that, the cable operators came back.6 UFC 40, in 2002, attracted a healthy 150,000 pay-per-view purchases and a good deal of media attention.
It was a big step toward recovery, but MMA was not quite off the mat. Noticing the popularity of reality TV, the Fertittas pitched a series to Spike TV, a new cable channel directed at young men. UFC would pay all the costs of the first season of what became The Ultimate Fighter (TUF); each episode would showcase not only the usual too-many-people-in-one-house nastiness but also an MMA fight. The last men standing would meet in a live finale. Spike leaped at the idea of getting free content, and TUF became a hit. When fighter Forrest Griffin faced off against Stephan Bonnar on April 9, 2005, there was a built-in audience for the first live broadcast of a UFC fight.
It was an instant classic. Going three full rounds, the fighters displayed the brutal glory of MMA at its best. Before an audience of 2.6 million viewers, Griffin won on a 29–28 decision.
These are the gloves he used in the bout that is commonly referred to as “the fight that saved the UFC.” The circle was complete. Begun as a made-for-pay-TV exhibition, UFC secured its future because it became a made-for-reality-TV one.
The numbers watching pay-per-view fights soared on the back of TUF's success, and MMA was well and truly launched. By 2008 it had joined the mainstream of American sports. And not just in the United States: The biggest bouts are broadcast to 140-plus countries, and MMA leagues exist on every continent except Antarctica. None of this would have happened without the critics, who forced UFC to change from a bloody spectacle to a real athletic contest. "We ran toward the regulation, not away from it," White later said. "In a way, McCain created the sport. If he hadn't pushed, it would not have become a sport now."7 In March 2016, New York became the last of the 50 states to legalize MMA; four months later, the Fertittas and White sold their interest in the UFC for $4 billion.
And MMA is not just for men, either. Ronda Rousey, an Olympic bronze medalist in judo, became the first woman to sign with UFC, in 2012.8 She rode her signature armbar move to the bantamweight title. In 2015 the previously undefeated Rousey was dethroned by boxer Holly Holm in an upset that confirmed that women's fights could draw big audiences, too.9 Then Holm lost a classic five-rounder to veteran Miesha "Cupcake" Tate after leading for most of the fight. Tate, whose background is in wrestling, had lost to Rousey twice before. So the plot thickens, to everyone's profit.
Mixed martial arts fights are not for everyone, and certainly not for those who cannot stand the sight of blood, but they are no longer anything like human cockfighting. In fact, the man behind that memorable phrase, John McCain, now says that if MMA had existed back in his days as a boxer at the Naval Academy, he would have tried it.
2007
MITCHELL REPORT
P
ity George Mitchell. The former senator from Maine had played a huge role in bringing something like peace to Northern Ireland in the late 1990s. But the terrorists on both sides of that hideous conflict were models of reason and cooperation compared to the challenge he took on in 2006: determining the nature and extent of performance-enhancing drug use in major league baseball.
As heads swelled and records fell in the 1990s and early 2000s, baseball’s drug culture went essentially unchallenged. This was a systemic failure. The players’ union was ferociously against testing for performance-enhancing drugs (PEDs), casting it entirely as a privacy issue, when it was a health, safety, and equity issue as well. Baseball executives were more than willing to look the other way as the game attempted to recover from the self-inflicted wound of the 1994 strike. The great majority of journalists contrived not to notice that acne—a common side effect of steroid use—was spreading across many a muscled torso. Not that the players would have talked, anyway; baseball’s code of omertà rivaled that of cycling (see 2000 entry on Lance Armstrong). The fans didn’t appear all that interested, either.
Even so, the evidence was accumulating. In 2000 there were several incidents in which drug paraphernalia was found in clubhouses.1 Former most valuable player Ken Caminiti admitted to his own steroid abuse in 20022 and suggested that as many as half the players were on something. Baseball did manage to institute an anonymous testing program the next year; of the 1,438 tested, 104 failed, a tribute not only to a certain lack of wit, but perhaps to a perception that the system was not to be taken seriously.3 Although feeble, this modest program can be seen in retrospect as the beginning of the end for steroids, or at least of open and rampant abuse. In 2004, a stronger system was instituted.
Still, the drip, drip, drip of suspicion and innuendo continued. A number of players had to testify to a grand jury about the notorious BALCO laboratory in the San Francisco Bay area, which was implicated in the convictions of numerous athletes, including track’s Marion Jones and cycling’s Tammy Thomas. The deeply unpopular former player Jose Canseco not only admitted to juicing, but implicated others in his books.4 There were numerous reports of players receiving PEDs through dodgy Internet pharmacies.5 In early 2006 two persuasive and well-researched books, Juicing the Game by Howard Bryant and Game of Shadows by Mark Fainaru-Wada and Lance Williams, made it obvious to all but the willfully blind that baseball had a problem.
Using steroids for sports was not only illegal under federal law, but had also been specifically prohibited by baseball since 1991.6 Maybe it was time to do something. Enter Mitchell, hired to conduct an independent investigation.
Mitchell had no subpoena power, and received little cooperation (only 68 of 500 former players contacted talked, for example, and only one active one). The union was unhelpful, advising the players not to speak with the senator.7 But he plodded ahead, and with the help of some clubhouse dealers, dropped this 409-page bombshell at the end of 2007.
Although the report fell well short of proof beyond a reasonable doubt in its specifics, the cumulative effect was damning. And it named almost 90 names, some of them big, including Barry Bonds, Roger Clemens, Mark McGwire, and Rafael Palmeiro—all of them once-certain Hall of Famers.8
The baseball establishment was naturally shocked and appalled at the Mitchell Report’s central finding: that “for more than a decade there has been widespread illegal use of anabolic steroids and other performance enhancing substances.”9 The American public decided to get outraged, too. In retrospect, however, the biggest impact of the Mitchell Report may have been on the players.
For years they had resisted action on PEDs. As in cycling, drugs thrived in baseball because of a sense that this was a closed culture that made its own rules. After Mitchell, the culture seemed to change. From that point on, few players have seemed willing to defend, even tacitly, the indefensible. What they knew better than anyone was that this was not a victimless crime. For every player who dopes to get that little extra pop to stay in the game—and that describes a large number of users—there is someone riding the bus in the minors not getting his chance.
Baseball will never be drug-free. In this, it is no different from any other segment of society. But in large part due to the Mitchell Report and the changes it provoked, PEDs can no longer be taken with impunity.
2008
MICHAEL PHELPS’S SWIM CAP
A
ssociated with prosperity and good fortune, the number 8 is auspicious in Chinese culture. So it is appropriate that the Beijing Olympics will always be associated with the number 8. The Games took place in 2008; the Opening Ceremonies started at 8:08 on August 8. And for the only time in Olympic history, an athlete won eight gold medals.
Beijing was not Michael Phelps’s first Olympics, nor his last. At age 15 in Sydney in 2000, he finished a strong fifth in the 200-meter butterfly and posted a stunning last 50 meters1 that hinted at something special. A few months later he became the youngest man to set a world swimming record. In Athens in 2004, he won eight medals (six gold and two bronze), tying the mark for the most in a single Olympics. In London in 2012, Phelps added four more golds and two silvers.
But nothing could compare to Beijing. Phelps had set an audacious goal: break Mark Spitz’s 1972 record of seven gold medals. To achieve it, he would have to race 17 times in nine days, in all four strokes.
He almost didn't make it. Like any great drama, this one had its hairbreadth escapes. In the second event, the 4 x 100-meter freestyle relay, in which Phelps swam the first leg, the whole thing looked in jeopardy. Going into the last length, America's fourth swimmer, Jason Lezak, was eight-tenths of a second2 behind.3 But as another great American athlete put it, it ain't over 'til it's over (see 1956 entry on Yogi Berra). This wasn't over.
Lezak made a good turn, then swam near the lane line to draft off the French swimmer who was leading, Alain Bernard. With about 20 meters left, the American found another gear and began to close. He hit the wall in perfect form, his arm fully extended—and out-touched Bernard by eight one-hundredths of a second.4 Lezak had turned in a miraculous 46.06 split, the fastest ever.5 Phelps had his second gold—and an image for the ages, as he let out a vein-popping bellow to the roof of the Water Cube.
A few days later Phelps had to make his own Hail Mary in the last 50 meters. The event was the 100-meter butterfly, the only individual event he was swimming in which he did not own the world record. Milorad Čavić, a Californian who represented Serbia, took a .62-second lead into the turn. Phelps, always known as a fearsome closer, narrowed the gap. With five meters left, though, he was still a little behind. But Čavić lifted his head6 as he glided to the finish. Phelps stayed in form and added one more stroke. It was just enough to win—by .01, or less than an inch. That was gold medal number 7. He collected his eighth the next day, in almost routine fashion, in the 4 x 100-meter medley relay, as the American team chopped 1.3 seconds off the world record. His final tally: eight for eight, with seven world records.
Phelps had a physique made for swimming: large feet, a long torso, and a 76-inch wingspan. “He did very well in the gene pool,” his longtime coach Bob Bowman liked to say.7 He loved to win and, perhaps more important, hated to lose. At the peak of his peak, he didn’t seem to mind the drudgery of looking at the bottom of a pool five hours a day, 365 days a year. He had that certain extra something that separates the sublime from the merely excellent.
But in other eras, he could have had all this and still fallen short, through bad luck and human error. Phelps benefited from the perfection of timing, in the form of the electronic touch pads that record the end of the race. These take the human element out of determining winners, and as American swimmer Lance Larson could attest, that can make all the difference.
At the 1960 games, Larson and Australian John Devitt finished 1–2 in the 100-meter freestyle. But who was first? Devitt thought it was Larson, and graciously congratulated him. Larson also thought he had won, and happily splashed around the pool. The three timers assigned to each lane consulted their scorecards; they split 3–3 on the finish. Then they looked at their stopwatches. The three timers for Devitt recorded identical times of 55.2; the three for Larson had him at 55.0, 55.1, and 55.1. The electronic timer, used as a backup, also had Larson ahead.
At this point, the chief judge, Henry Runströmmer, stepped in.8 Under Olympic rules, he had no standing in the matter. Nevertheless, he declared that Larson swam 55.2 and that Devitt had won. He came to this conclusion, he said, because he had seen the finish. True enough, but he was five lanes away, and at an angle.9 The Americans appealed for years, but the result stood.
Better timekeeping technology became a priority after that. Electronic touch pads got their first Olympic workout in Tokyo in 196410 and became standard in 1968.11 When Phelps out-touched Čavić, he had precise timing devices and underwater cameras to make sure it held up.
As of the end of 2015, Phelps has twice as many gold medals (18) as any other athlete. His 22 total medals are 10 more than any other man has.12 The numbers deliver a definitive judgment: he is the greatest Olympian of all time.
2009
VENUS WILLIAMS’S DRESS AND SERENA WILLIAMS’S SHOES FROM WIMBLEDON
I
ndividually, each is great. Together, they are the finest sibling act in any sport, at any time, on any planet.
Venus, the elder by 15 months, was the first to greatness, winning both Wimbledon and the US Open in 2000 and 2001, and the Olympic gold in singles in 2000. In mid-2002 she was the top-ranked player in the world—until Serena toppled her. Venus won three more Wimbledons after Serena's ascendancy, giving her a total of seven majors (and seven runners-up). Since the open era began in 1968, only seven women have won more.
Serena, however, has made her sister’s achievements look almost pedestrian. As of July 2016, she has won 22 majors—tied with Steffi Graf for the most in the open era—and at the tennis old age of 34, she looks like she could keep racking them up for years. Given that the game is more globalized than ever, it’s hard not to conclude that she is the greatest player ever. Venus would be in anyone’s top 20.1
In 2001, when Venus met Serena in the finals of the US Open, it was the first time siblings had competed for a US Open title, and the first all-sibling final at any Slam since 1884.2 The match was broadcast nationally in prime time, also a first. Venus won in straight sets. Afterward, she sounded ambivalent. "I always want Serena to win," she said. "I'm the bigger sister. I'm the one who takes care of her. I make sure she has everything even if I don't. I love her. It's hard." For the reserved Venus, this was as expressive as anything she ever said publicly, and was obviously heartfelt.
And then the two went on a binge, meeting in the finals of five of the next seven majors, including four in a row. Serena won all four to complete the first "Serena Slam" in 2002–2003 (four straight majors, but not in the same calendar year). All told, through 2015 the sisters had faced each other 27 times, with Serena leading 16–11, including a 6–2 mark in major finals. Both were brilliant at the same time, for a while, but this never developed into a true rivalry. Watching them play each other was an uncomfortable experience. Many of their matches were one-sided and error-strewn, with Serena visibly holding herself back from her usual fist-pumping, screaming aggression. They never apologized for beating each other, but they never relished it, either.3
They much preferred playing with each other. In the runup to Rio, they had a perfect Olympic record, winning doubles golds in 2000, 2008, and 2012; they were also 14 for 14 in Slam finals.4 Several times, they won a doubles title together after dueling in the singles.
In a sense, their greatest achievement was maintaining their extraordinary bond despite both wanting the same thing, fiercely. Tennis was a Williams family affair, something the five sisters and their parents did together; there is even a picture of Venus pushing Serena in a stroller onto a court.5 They grew up playing against each other every day. Venus was the first to reach a Grand Slam final,6 and she dominated their early matches, winning five of the first six. But when Serena was the first to win a major, the 1999 US Open, Venus was visibly shattered.7
For their entire lives to that point, Serena had played catch-up. It was Venus who was profiled as a 10-year-old in Sports Illustrated and the New York Times,8 Venus who brought agents to the door, Venus who got the first big endorsements. As Serena once put it, in a comment that hints at the complex dynamics of the sisterhood, “Ever since I was young, even when I came on tour, it was Venus, Venus, Venus, Venus. Oh, and the little sister.”9
But from 2002 on, the little sister took over. The turning point may have been psychological as much as anything. In May 2003 Serena noted, “I had to realize that I wasn’t Venus. I used to want to be her—not like her, but be her—and I think that held me back.”10 With that separation came greatness.
Serena’s journey has been signposted with stops and starts, injuries and ailments, listlessness and passion, slumps and dominance, graciousness and the opposite. She’s been ranked Number 1 half a dozen different times and has sunk as low as 139. In 2006 Chris Evert chided her for not wholly committing herself to tennis. Prior to the 2007 Australian Open, Serena was dubbed a “lost cause.” She won that championship and then took the US Open as well. There has also been tragedy—the murder of an older sister, Yetunde, in 2003—and a medical scare for Serena in the form of blood clots in both lungs in 2011. In addition there have been a few on-court foul-mouthed outbursts, including the classic threat to shove a ball down an offending lineswoman’s throat in 2009.
The most recent phase of Serena’s career has been the most remarkable one. After a humiliating first-round exit in the French Open in 2012, she got angry, and fit. No one had ever questioned her competitiveness on the court, only her commitment off it. At age 30, when most players are thinking about what’s next, she won 102 of her next 107 matches, plus a gold medal in singles in the 2012 London Olympics.
Then came another letdown. In the first three Slams of 2014, she didn’t make it as far as the fourth round, and in the second round of the Wimbledon doubles, she lurched and staggered, unable to hit a serve. “She can’t even pick up a ball,” noted an incredulous Evert.11 Mercifully, the sisters retired after three very strange games; Serena blamed a viral infection. On the verge of being written off yet again, she then ripped off her second “Serena Slam” in what may have been the most dominating stretch of her remarkable career. None of it was easy. At the Australian Open in 2015, she vomited during the match; at the French, she played with the flu.12
Although both are still competing, the Williamses have more past than future, and it is possible to have some perspective on their achievements. Going beyond the numbers, impressive as they are, what can be said is that for a full decade, the sisters dominated their sport. That cannot be said of any other pair of siblings.
Considered that way, it does seem as if this is a story that has not been given its due. There are several possible reasons for this. The obvious one is that they are black, and tennis is still an overwhelmingly white sport. Other than the Williamses, the only African American woman to win a major is Althea Gibson, who won five from 1956 to 1958. There is some talent coming up, including Madison Keys, Sloane Stephens, and Taylor Townsend, but there may be lingering, even unconscious resistance to seeing African Americans as the face of the sport.
A second factor is that tennis is declining in popularity in the United States. Outside a few big events, there isn’t much coverage of the sport, period. Ironically, the sisters would get more attention if American men were doing better, thus raising the profile of the game. A third factor is that in commercial terms, the most successful endorsers seem to be graciously bland. So in 2014, blond, willowy, and European Maria Sharapova—who does bland nicely—made $10 million more in endorsements13 than Serena, who hadn’t lost to her since 2004. Serena can do gracious, but bland is not one of her many gifts. Perhaps none of this mattered; it seems more likely that it all did.
In 2009 the sisters met at Wimbledon. Serena wore the shoes pictured here; Venus wore the dress on the opposite page. It was their last showdown in a major final. Venus was the two-time defending champion and had beaten Serena the previous year for the title. This time, Serena won 7–6, 6–2. As late as 2010, the sisters were ranked one and two in the world. That year, however, Venus was diagnosed with an autoimmune disease, Sjögren's syndrome;14 between that, a balky hip, and the calendar,15 she has not been in the top five since January 2011.
But she is still the big sister. When Serena had her bizarre breakdown at Wimbledon in 2014, Venus was there, putting her arm over her sister’s shoulder and gently escorting her off the court—just as she had helped her onto it, 30-plus years before.
2010
FIRST BASE FROM ARMANDO GALARRAGA’S “IMPERFECT GAME”
T
he best umpires don’t get noticed. So June 2, 2010, was a bad day when Jim Joyce, a 22-year vet whom players considered one of the game’s best,1 became a star.
It was the top of the ninth, and the Detroit Tigers were leading the Cleveland Indians 3–0; Detroit’s Armando Galarraga (career record to that point: 20–18) was three outs from pitching a perfect game. The hitter was Mark Grudzielanek, a 15-year veteran, who at age 40 had lost a little bat speed. Center-fielder Austin Jackson was playing him slightly toward right, and shallow. Galarraga’s fastball was a little high and a little flat. Grudzielanek lined it to deep left-center—exactly where Jackson wasn’t. Jackson sprinted after it, with his back to the plate. Just as he reached the warning track, he stretched out his glove—and the ball landed in it.2 It was the play of the game, maybe of the year, and was uncannily similar to Willie Mays’s famous catch in the 1954 World Series. Every no-hitter seems to have one improbable defensive gift. Now Galarraga had his.
The next batter hit a routine grounder to shortstop. One out to go. Up came Jason Donald, the Indians’ light-hitting rookie shortstop. A fastball on the outside corner for a strike, a slider outside for a ball. Donald anticipated another pitch outside and got it, poking it toward the right side of the infield. First baseman Miguel Cabrera ranged far to his right, almost halfway to second base, then squared and threw toward the bag. Hustling over, Galarraga gloved the ball and stomped the bag pictured here with his right foot.
He and Cabrera threw up their arms in triumph.
Then Joyce swung his arms wide and yelled: “Safe!” Galarraga’s arms came down to the top of his head. He smiled, incredulous. Then he collected himself and calmly got the last out. Joyce was used to being yelled at by managers, abused by players, and booed by hometown partisans, so he was not unduly bothered that the crowd, and the Tigers, were giving him a hard time. His job was to “call ’em as he sees ’em,” and as he saw it, Donald got to the bag first. But the shadow of a doubt was deepening. As he walked off the field, Joyce was a worried man. On a monitor in the umpires’ locker room, he saw what everyone else had: he had blown the call on the last out of what should have been the twenty-first perfect game in history.
And if Galarraga’s best moment was his smile and his professionalism in inducing a twenty-eighth out, in the next few minutes, Joyce would have his own. “No, I did not get the call correct,” he said. “I kicked the shit out of it.”3 And he remembered the victim of his mistake: “I feel like I took something away from the kid and I don’t know how to give it back.” Then he cried.4 Even the Detroit press could not beat him up after that. When the reporters left the locker room, Joyce made one request to Tiger manager Jim Leyland, who had come by to comfort him: he wanted to see Galarraga. Still distraught and tearful, Joyce managed to tell the pitcher, “I am so sorry, Armando. I don’t know what else to say.” And Galarraga found the right answer: “Nobody’s perfect.”
This is perhaps the only time a terrible call ended up being a feel-good story, but the grace under pressure the two men showed made it just that. The two protagonists became something like folk heroes. At the following night’s game, Galarraga handed the lineup card to Joyce, who was working home plate. They shook hands—and the crowd cheered. One blown call, two class acts.
The incident played an important role in baseball's expansion of instant replay, restricted at the time to disputed home-run calls. In the aftermath of the imperfect game, Commissioner Bud Selig said he would think again.5 Selig himself was cool to instant replay, and so were many players6 and managers.7 But it became faintly ridiculous that everyone in the stadium with a smartphone had access to instant replay—but not the umpires. Beginning in 2014, Major League Baseball agreed, expanding replay review to cover plays like the one Joyce missed.