Conference Report: 2011 LISA

The following document is my general trip report for the 25th Systems Administration Conference (LISA 2011) in Boston, MA, December 4-9, 2011. It is going to a variety of audiences, so feel free to skip the parts that don't concern or interest you.


Thursday, December 1

Today was my actual travel day. Up at 5am, out the door by 5:30, to the airport. Cleared Security with no trouble (though now they say to take the CPAP out of the bag for scanning, but they still don't swab them). Got to the gate; decided not to upgrade to first class, given the $150 change fee, the $756 restriction change, and then the additional cost of the upgrade itself; boarded uneventfully; and departed (pushed back from and left the gate) on time. Unfortunately, the radar altimeter failed its self test, so we had to wait for a ground crew to direct us back to the gate and for maintenance to come on board to fix it. The fix was to wait a little longer for it to reset itself. We departed a second time only 38 minutes late and made up almost half of that in the air, arriving only 20 minutes late. To their credit, the crew kept us informed reasonably well.

Got my bag (quicker than expected; it was circling by the time I got to baggage claim), caught the bus to the Blue Line, and came into town (Blue Line to Government Center, change to the Green Line, and as luck would have it an E train to Prudential, which is at most 2 blocks from the hotel). Checked in, unpacked, and headed to lunch. On my way to lunch I ran into my friend Tom, who was going to lunch with a work colleague.

After lunch I did some minor work before leaving at 4:30 to meet Tom for dinner. We took the Orange Line to Kendall and then a bus to his place where we picked up a ZipCar to drive to Reading to meet more of our friends (Dave and Paulo) for dinner then schmoozing at their place. Tom drove me back to the hotel around 11pm whereupon I crashed.


Friday, December 2

Friday was supposed to be a vacation day (as I'd spend much of Saturday working). Lunch was with another local friend (Jenn) at Wagamama's in the mall (she works nearby in the Copley branch of the Boston Public Library), and dinner was with the Nolans (David, Susan, and Kelita) at Famous Newscaster's (Mary Chung's) in Cambridge.


Saturday, December 3

On Friday I'd gotten notice that a large web site was ready for its QA review, so I went down to the conference space (where they were setting up for the 5pm opening of registration) and reviewed the web site more or less all day. Registration opened at 5pm and I was the first one through the M-Z line, since I could have the staffer skip the "welcome to LISA" spiel; it was my 18th consecutive LISA conference, after all. Got my (long-sleeved gray) conference t-shirt and schmoozed with friends old and new before the Welcome Get-Together and Conference Orientation ("Newbie BOF") at 6. Unfortunately, the regular presenter (and the only copy of the presentation) was still in the Philadelphia airport, so I wound up pressing conference co-chair Doug Hughes into doing it as an audience-participation event. I think we did okay, considering we had no presentation and no preparation.

After that I wound up heading to Legal Seafood with Alan Clegg, Tobias Oetiker, and Steven whose last name I don't remember. I pigged out on the lobster bake (1.5-lb ish lobster, steamers, mussels, chorizo, and an ear of corn, plus a cup of clam chowder). Got back to the hotel and ran into Mark Burgess and caught up briefly, but he was on his way to being unconscious from jetlag. Chatted with Adam and Nicole for a bit before she let slip it was her birthday. Adam then treated her and me to birthday drinks (mine was about 3 weeks ago). Went back upstairs around 11:30pm or so, caught up on email, and went to bed.


Sunday, December 4

Today was a free day as far as the conference was concerned. I spent much of the morning writing up the start of this trip report, bits of the previous trip's report, and continuing the QA review of the web site.

Around 10:30 I stopped working, did the morning ablutions, and headed out to meet John, Robert, and Tom at China Pearl for a mini-motss dim sum. After that, I went back to the hotel and worked for the afternoon reviewing more of that web site.

For dinner, I went out with three folks (Grant, Lee, and Mark) to Fleming's. It was windy but not too cold, so most of us walked there and back. We all went with the prime rib special; I had a Caesar salad and got a loaded baked potato (no sour cream) with mine, and had the caramel turtle pie for dessert (which came with the meal). Excellent food, though they did manage to miss bringing one of the sides. By the time the replacement was in the window, we'd finished dining. The restaurant offered several ways to make it right (bring it anyhow, box it to go, take something off the bill, etc.), but since the person who'd ordered it wasn't going to eat it then, had no way to reheat it, and had the meal paid for by work, he wound up taking home a second dessert instead.

We got back to the hotel in time for the tail end of the Board Games night. I watched some of the Pictionary game, and some Innovation(?), before heading off to bed.


Monday, December 5

I spent the majority of the day reviewing more of the web site. I made it to about the halfway point of the content.

For lunch I managed to sneak into the Tutorial Lunch (thanks to an inside contact who got me a ticket). Saw a lot more people I'd not managed to see yet and caught up with some of them. After lunch, more web review.

For dinner, I met up with a group of people I mostly didn't know. I'd met Peter and Steve at past conferences and had chatted briefly with John from the USENIX Board, but Ben, Carlos, and the 2 others whose names I don't remember were all folks I hadn't met before. We went to Five Napkin Burger in the mall and I wound up getting the special, which is a 10-oz beef burger with bacon, caramelized onions, and blue cheese, with Tuscan fries (French fries with an Italian herb mix and parmesan cheese). After that, we went to the hotel bar to schmooze and catch up with those who were still arriving.

After the evening activities and hallway tracking, I went back up to the room to launch a web site for work. Took a little while to jump through the hoops of getting online via the hotel network to RDP into work to connect to the load balancer to update the rules.


Tuesday, December 6

Tuesday's sessions began with the Advanced Topics Workshop; once again, Adam Moskowitz was our host, moderator, and referee.

[... The rest of the ATW writeup has been redacted; please check my web site for details if you care ...]

After the workshop I went to dinner with Bryan and Bob. We settled on splitting a couple of dishes at P. F. Chang's. After that, since there was no hot tub (boo!), I hung out in the hallways and at the bar, chatting and schmoozing and networking, until bedtime. I made the mistake of checking email and wound up having to fix a problem in my QA environment because the web server, httpd, was dying and restarting once per second (over 900,000 times on one server and over 700,000 times on the other). Managed to get to bed by 11:30pm anyhow.


Wednesday, December 7

Woke up, caught up on email, and got down to the conference floor. Had a lovely conversation with USENIX Vice President and Acting Executive Director Margo Seltzer before getting to the announcements and keynote.

Tom and Doug kicked off with some (probably over-rehearsed) comedy. This is the 25th LISA (and the 22nd standalone conference, since LISA I through LISA III were workshops at what was then the Winter USENIX conference). The standard statistics: the program committee accepted 28 papers from 63 submissions, and as of shortly before the keynote we had 1219 attendees. They thanked the usual suspects (program committee, invited talks coordinators, other coordinators, steering committee, USENIX staff and board, sponsors, media sponsors and bloggers, authors, reviewers, and our employers for letting us show up) and gave the usual housekeeping information (speakers meet with their session chairs 20 minutes before going on stage, BOFs run in the evenings, and session summarizers are needed). The vendor exhibition has its usual hours (Wednesday 12n-7p and Thursday 10a-2p), with a happy hour Wednesday at 5:30pm and the raffle drawing Thursday at 1:20pm. Poster sessions are Wednesday at 6:30p and Thursday at 5:30p in the hallway near the vendor floor/reception space, and the reception is Thursday night at 6:30p, followed by the return of the LISA Game Show at 8p.

LISA 2012 (the 26th) will be December 9-14 in San Diego and Carolyn Rowland will be the program chair.

Next the regular awards were presented:

Finally, we got to the keynote address. Ben Rockwood, who also wrote Short Topics in System Administration 13: The Sysadmin's Guide to Oracle, spoke on The DevOps Transformation. It's a shift in the community but it's not new. It's more of a catalyst. It's a cultural and professional movement, not a tool or product and not a title. It's not just dev and ops, but also enterprise and DBA and QA and release engineering and so on. It can be described by culture, automation, measurement, and sharing (CAMS).

After the break I went to the invited talks track on newish technologies. First up was Michael Wei, speaking on Issues and Trends in Reliably Sanitizing Solid State Disks. To oversimplify, because of the way the Flash Translation Layer (FTL) sits between the flash media on the SSD and the rest of the hardware, it's difficult to truly erase SSDs. (It should be noted that this includes those ubiquitous USB flash drives.) However, with the Scramble And Finally Erase (SAFE) proposal and by scrubbing the drives several times, you can reliably erase an SSD.

The second speaker in this block was Leif Hedstrom, speaking about the Apache Traffic Server (ATS) and how it's more than just a proxy. He started with its history (it grew out of the Inktomi Traffic Server from the 1990s, acquired by Yahoo! in 2003, open sourced in 2009, and made a top-level, self-sustaining, self-managed community in 2010). There are a lot of proxy servers out there; choose what's right for your needs in terms of features and performance (latency matters more than throughput once the latter is on the order of 100K requests/sec). Forward, reverse, and intercepting proxy servers all have their pros and cons.

ATS is a caching server aimed at performance, saving on the 3-way handshake that sets up TCP connections, on congestion control (window sizes), and on DNS lookups. It addresses the concurrency problem with multithreading and event processing: n threads per core, m threads per disk, and some control threads that talk to shared resources (RAM and disk caches, reloadable config files, stats and logs, and so on). m and n scale automatically. ATS has many, many config files, but only the records, storage, and remap config files really matter.

ATS is versatile and ridiculously fast (they got 220K req/sec with cache and 100K req/sec with no cache).

After the lunch break, I went to the invited talks on Chef. First up was Dimitri Aivaliotis, with Converting the Ad-Hoc Configuration of a Heterogeneous Environment to a CFM, or, How I Learned to Stop Worrying and Love the Chef. Keeping it all in your head is neither manageable nor scalable. They looked at cfengine, bcfg2, and puppet, but chef fit their way of thinking best. Roles allow for structuring the cookbooks.

The second speaker (who mistakenly thought he was on at 3pm not 2pm) was Jesse Robbins who spoke on GameDay: Creating Resiliency Through Destruction. He started as a firefighter during a break from IT. Operations is work that matters; we run the infrastructure the world depends on. GameDay is injecting large-scale catastrophic faults into the infrastructure. It's part of the Resilience Engineering discipline which isn't often done in the system engineering space.

Resilience is the ability of a system (host, service, network, applications, people, and so on) to respond and adapt to changes, faults, and interruptions. People and culture both matter. Scaling OUT rather than UP provides resiliency. If you have three 3-9s services that depend on each other, you're actually getting only about 99.7% reliability (0.999 x 0.999 x 0.999 is roughly 0.997), noticeably less than three nines. MTTR is therefore more important than MTBF over time; decoupled and fast-to-recover is actually better.
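To make that arithmetic concrete, here's a quick sanity check (my own sketch, not anything from the talk; the 99.9% figure and the three-service chain are the only inputs taken from it):

    # Compound availability of serially dependent services: the stack is only
    # up when every service in the chain is up, so availabilities multiply.
    def compound_availability(*availabilities):
        """Multiply per-service availabilities given as fractions (0.999 = three nines)."""
        result = 1.0
        for a in availabilities:
            result *= a
        return result

    three_nines = 0.999
    print(f"{compound_availability(three_nines, three_nines, three_nines):.4%}")
    # -> 99.7003%, noticeably less than three nines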

GameDay is a way to train people into better handling MTTR, in three phases:

Start small; it's a culture change and you'll get resistance. The exercise should be achievable. Build and increase awareness; over time this builds confidence. Then you move up to the full-scale live-fire exercise. For that first big one, pick the worst survivable failure, like a full data center power-down. The first time will probably be a disaster and will definitely identify problems. Run a post-mortem after the fact.

After the afternoon break I stayed in the IT space and listened to Avleen Vig's talk, The Operational Impact of Continuous Deployment. Basically he noted that continuous deployment is difficult to implement: It requires time, commitment, and effort from management, operations, and development teams.

Is having developers in production during production hours (!= 24x7) a good thing? It depends on the environment. Some only do the continuous integration part of continuous deployment [my environment is one of them. -jss]. Some don't do either; financial institutions have different requirements from a social media web site.

What about PCI compliance? They don't store any financial data; all transactions go through a third party, so PCI isn't an issue for them now. If they had to, they'd either make the infrastructure entirely compliant (hard) or make just the relevant bits PCI compliant and leave those bits out of CD and CI.

What about new services? Did they work more closely with developers up front? Yes. They have 7-10 people on the Ops team. Working together gets Ops involved sooner and helps avoid "We can't do that" later in the process.

How do you handle failed deployments? They have a roll-forward policy. Changes are small enough that "fix and re-push" is faster than "roll back." Their process is to Deploy from Dev to QA (then test) and then to Prod, and pushing from QA to Prod is down to 2 minutes for all 150 servers in their environment.
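As a rough illustration of that roll-forward policy, here's my own sketch (the script names are hypothetical; this is not the speaker's actual tooling):

    # Sketch of a roll-forward deployment: a failed push is answered with a
    # small follow-up fix pushed through the same pipeline, not a rollback.
    import subprocess

    def push(stage, version):
        # Deploy `version` to a stage and return True on success (hypothetical script).
        return subprocess.call(["./deploy.sh", stage, version]) == 0

    def healthy(stage):
        # Post-push health check (hypothetical script).
        return subprocess.call(["./healthcheck.sh", stage]) == 0

    def deploy(version):
        for stage in ("dev", "qa", "prod"):
            if not (push(stage, version) and healthy(stage)):
                # Don't revert: keep changes small, fix the problem, and re-push.
                raise RuntimeError(f"{stage} failed for {version}; fix and re-push")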

Finally, Kris Buytaert spoke on DevOps with his talk, The Past and Future Are Here, It's Just Not Evenly Distributed (Yet). He started as a developer, became an operations person, and is now a Chief Trolling Officer.

DevOps Days started in October 2009. It's a growing movement, reaching out to different communities and pointing out the problems they see. Only the name ("DevOps") is new. It's an attempt to break down the silos.

Continuous integration: Developers should write unit tests (cf. Hudson/Jenkins), but systems/operations people should do that too (e.g., for their puppet or chef recipes). We need pipelining (dev to test to user acceptance testing to preprod to prod), aborting on failure, ideally with parallel build pipelines. Operations writes code too (configuration files, for instance), just in crappier languages.
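A minimal sketch of that kind of pipeline (my own illustration; the stage names come from the talk, and the commands are placeholders rather than anyone's real Jenkins job):

    # Run the pipeline stages in order and abort on the first failure.
    # Note the ops-side tests (puppet/chef recipe checks) sit alongside the
    # developers' unit tests rather than being an afterthought.
    import subprocess
    import sys

    PIPELINE = [
        ("dev",     ["make", "unit-tests"]),        # app unit tests + recipe tests
        ("test",    ["make", "integration-tests"]),
        ("uat",     ["make", "acceptance-tests"]),
        ("preprod", ["make", "deploy-preprod"]),
        ("prod",    ["make", "deploy-prod"]),
    ]

    for stage, command in PIPELINE:
        print(f"=== {stage} ===")
        if subprocess.call(command) != 0:
            sys.exit(f"pipeline aborted at stage: {stage}")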

The talk was more focused on webapps-based environments and not necessarily large enterprises.

After the vendor happy hour (where I didn't eat anything), I met up with Adam, Bill, Eric, and Kirk, and headed out to Grill 23 to meet Chris for the annual 0xdeadbeef dinner. I had a Caesar salad and a 20-oz. bone-in ribeye, and split some sides (duck tots, or potato puffs fried in duck fat, and asparagus). The five of us drinking wine (Adam had bourbon) split two bottles of a good red zinfandel; I had an ice cream sundae for dessert, and then the six of us split a bottle of vintage port (Graham's 1980, having declined the 1941).

Adam and I had to dine-and-dash to get back to the hotel only slightly late for the 9:30pm Game Show run-through. As in years past, he and I (and another tester, this time Nicole, who was also our Vanna) ran through all 150 questions (5 questions in each of 6 categories per round, over 5 rounds: 1-3, finals, and tie-breaker), reordered them within categories (for example, the $500 question should be harder than the $100 question), and even reworded or swapped out questions to avoid problems on game day.

That finished around 11:30pm so we cleaned up and I went off to bed.


Thursday, December 8

Today started with announcements and reminders: Silence your phone, the Vendor Exhibitions close at 2, the Game Show questionnaire was due by 3 if you want to be a contestant, and there was a schedule switch (which didn't affect me).

The plenary session was "One Size Does Not Fit All in DB Systems" by Andy Palmer. Executive summary: he talked about where databases and bioinformatics meet, and discussed lots of different database technologies. The traditional DBMS architecture (originally designed and optimized for business data processing) has been used to support many data-centric applications with widely varying characteristics and requirements. The commercial database market is beginning to split into a number of segments, most notably OLTP vs. data warehousing, each of which is defined by a specific workload. Innovative "purpose-built" independent database engines are now being widely adopted to satisfy specific workloads. Picking the right engine for a given workload is a new challenge, and doing it well can offer discontinuous benefits in terms of performance, scale, maintainability, and concurrency.

After the morning break I went to Bryan Cantrill's invited talk, "Fork Yeah! The Rise and Development of illumos." Illumos is an open-source descendant of OpenSolaris (based on Solaris 10, a.k.a. Solaris 2.10 or SunOS 5.10). A quick history refresher: SunOS 4.x became 5.x as Solaris; Solaris 2.1 shipped and sucked, and it wasn't stable or good until 2.5. 2.5 had to get it right because new hardware (UltraSPARC-I) depended on it. Sun was committed to defeating Windows NT. On Scalability Day (May 19 or 20, 1997), Microsoft tried to show Windows NT was scalable. It was a sham, and everyone, even the Wall Street Journal, knew and admitted it.

They managed to get a lot of talent in the early 2000s to do OS innovation. By 2001 the OS worked in the 4.x-to-5.x model. It was solid... but now what? Embrace radical ideas! These include:

All of this innovation was by individual engineers. People, not organizations, innovate.

The rise of Linux and x86 drove the cost of the OS towards zero, so open sourcing the OS became the right business decision. Sun couldn't open source it immediately because it was heavily encumbered by third-party code, requiring renegotiation. DTrace went out first (January 2005) and the rest of the new stuff (ZFS et al.) followed in June 2005. They released everything they could, under the Common Development and Distribution License (CDDL), a file-based copyleft license.

So now we have OpenSolaris with a few proprietary bits (like libC internationalization, proprietary drivers, od, tail, and some other internal bits). You had the right to fork your own release but you didn't have the power to do so. Thus OpenSolaris remained a Sun puppet and some of Sun's middle management thought they were puppeteers. They followed the Apache model with no forks (thus governance discussions). Also, there were pretty much no non-Sun folks who could implement a non-vendor fork. And if you wanted your own code in the Sun release you had to give them the copyright.

In fall 2007 (two years later), Sun wanted to create a new OpenSolaris-based distribution, also called OpenSolaris, which pissed off the OpenSolaris Governing Board (OGB). Sun didn't really want the OGB, so the community got deflated and more or less gave up for the next three years.

Then Oracle bought Sun (announced in 2009, closed in 2010). Scott McNealy eulogized Sun with "Kicked butt, had fun, didn't cheat, loved our customers, changed computing forever." That happens to be the antithesis of Oracle, and Oracle made it clear they had NO interest in OpenSolaris. (Oracle actually thought about CLOSING the system!)

"What you think of Oracle is even truer than you think it is." — Bryan Cantrill

"Oracle's purpose is to make money." — Larry Ellison (paraphrased)

Outside Oracle, starting in summer 2010, several people began rewriting the formerly closed bits from scratch or porting them from BSD. By early August an early system was booting. The announcement on August 3, 2010, came with code and a demo; the intent was to be an open downstream repository of OpenSolaris.

On August 13, 2010, Oracle circulated an internal memo ending the daily pushes of Solaris code to OpenSolaris, effectively closing the project without telling the community. They never publicly announced it, and while they said they'd release major features to the open source community, they lied: Solaris 11 came out on November 9, 2011, and nothing from it has come out under the CDDL.

This accelerated the Solaris diaspora: within 90 days the entire DTrace team, all the primary inventors of ZFS, and the primary engineers for zones and networking had left Oracle. Luckily for illumos, they all moved there. The irony: illumos doesn't take copyright away from the developer, yet it remains under the CDDL. ZFS, DTrace, and Zones are gaining new innovations in illumos that will never be in Oracle Solaris.

After his talk (which could be considered a rant), I stayed in the room for Veera Deenadhayalan's talk, GPFS Native RAID for 100,000-Disk Petascale Systems. It was a bit of a product placement. He talked about the challenges traditional RAID faces with today's disk drives (too many rebuilds given MTBF, bit rot, and so on) and the solutions they added natively to GPFS (like declustering). They're doing 8 data/3 parity with 4-way replication across a 47-drive declustered array in a 384-disk enclosure.

Data integrity in the face of undetected disk errors (not media errors) is another problem. They use checksums and version numbers; a checksum stored alongside the data can't catch a dropped write, so the version information needs to be stored on another disk. Integrity management runs as background tasks (rebuild, rebalance, and verify), all scheduled opportunistically.
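The point about keeping the verification data on a different device than the data itself clicked for me with a toy example (my own sketch, nothing to do with actual GPFS internals):

    # Toy model of end-to-end integrity checking: the checksum and version
    # live on a different device than the data block, so a silently dropped
    # write (an undetected disk error) is caught at read time.
    import hashlib

    data_disk = {}   # block_id -> payload bytes             (device A)
    meta_disk = {}   # block_id -> (version, sha256 digest)  (device B)

    def write_block(block_id, payload, version):
        data_disk[block_id] = payload   # imagine this write being silently lost
        meta_disk[block_id] = (version, hashlib.sha256(payload).hexdigest())

    def read_block(block_id):
        payload = data_disk.get(block_id, b"")
        version, digest = meta_disk[block_id]
        if hashlib.sha256(payload).hexdigest() != digest:
            raise IOError(f"block {block_id} fails verification (expected version {version})")
        return payload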

This is basically a product for high-performance supercomputers, with declustered RAID and end-to-end checksumming, all with no hardware RAID controller.

After the lunch break I hallway tracked instead of going to any specific talks. Had some meetings about organizational politics and how to deploy new technologies.

After the afternoon break, I went to Jacob Farmer's invited talk, "My First Petabyte: Now What?" He'd put together two hours of material but only had one hour to deliver it.

There are four growth strategies:

There are also five stages of data proliferation grief:

Few have a petabyte now; many more think they will in the next few years. Some of the challenges are:

Modern arrays are striped using RAID 6, vertically across cabinets, with 48, 60, or 84 drives in a 4U cabinet at 3 TB/drive, but restriping and replacing even a 250-GB FRU can be very time consuming.

Backing up a petabyte can easily take months if not years, depending on pipeline and tape (or remote disk) media. Replication isn't enough (e.g., replicating corruption).
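The "months" figure is easy to sanity-check with a single-stream, back-of-the-envelope estimate (my own numbers, purely illustrative):

    # How long does one petabyte take through a single backup stream?
    PETABYTE = 1e15                      # bytes
    for rate_mb_s in (100, 500, 2000):   # sustained throughput in MB/s
        days = PETABYTE / (rate_mb_s * 1e6) / 86400
        print(f"{rate_mb_s:5d} MB/s -> {days:6.1f} days")
    #   100 MB/s ->  115.7 days
    #   500 MB/s ->   23.1 days
    #  2000 MB/s ->    5.8 days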

Finally, libraries reverse things from our point of view. We tend to think of the live media as primary and the backup as secondary; libraries tend to say the gold or master copy is the one in the protected backup location, with checksums and hashes and so on, and the copy on spinning spindles is just a cache.

The second talk in this block was Larissa Shapiro's "Can Vulnerability Disclosure Processes Be Responsible, Rational, and Effective?" ISC produces critical infrastructure software and services upon which the Internet and telecommunications industries depend. Through our Phased Vulnerability Disclosure process, we provide rational disclosure of vulnerabilities through a series of notifications, so industry can prepare without rushed actions, and critical infrastructure can be upgraded without "bad guys" knowing about the vulnerability. As an organization dedicated to open source software and open process, ISC is publishing the policies, processes, and tools involved. Please join us as we walk through a new model that vendors and operators can use to roll out security fixes without adding to operational risk.

I went upstairs to change before the 6:30pm reception. The food was pretty good; I had some sliders, fries, onion rings, hot wings, pizza, and an ice cream bar. I was actually in the slide show from the past 25 years a couple of times.

Shortly before 8pm I went across the hall to help with the Game Show setup. We went up a little after 8 and took about 90 minutes as usual (though why we were only scheduled for 60 minutes baffled all of us). It generally went off without a hitch, and we had fabulous prizes for all 9 of our contestants.

After the Game Show I made an appearance at the Scotch BOF for a while and chatted with folks there and nibbled some munchies and drank some decent scotch (and a lot more water). Gave up the ghost around 1am and got to bed by 1:30am.


Friday, December 9

Today started with John D'Ambrosia's invited talk, "Ethernet's Future Trajectory." John chairs several IEEE standards task forces and has been involved since 1999. To oversimplify, 10Gb Ethernet was a success for servers and networking backbones but not for desktops. Aggregation is key (and a distributed problem). Ethernet is growing; there are new efforts targeting 10GbE, 40GbE, and 100GbE. Standards are cool, but they need to ship and make money.

After the morning break, I went to Wade Minter's talk, "Customer Service for Sysadmins." It was more interactive than most talks; he took advantage of the aisle to work the room. (If everyone he interacted with spoke up or was mic'd it'd've been better.) We started off with examples of customer service, both bad (rigid, poorly automated, and no responsibility) and good (flexible, empowered, acknowledgement, get a human, follow-up from initial contacts, and empathetic).

We tend to have five types of customers:

(These are all his words.)

Dissatisfied customers are an opportunity: If they're taking the time to complain, they're still emotionally invested in the product or service and want it to be successful. Work with them; more empathy, more patience. Optimize the bugs in the relationship.

Some other things to do:

In summary, we need to be empathetic (and induce empathy in others), be honest and open, treat others as we'd want to be treated, and not be a BOFH.

After that, nothing grabbed my attention, so I hallway tracked again. I went to lunch with David, Lee, and Nicole, and Mark joined us later.

After lunch I went to David N. Blank-Edelman's invited talk, "Copacetic."

During a hiring search, he talked to far too many sysadmins who wanted to use best practices at their workplace (configuration management, DevOps techniques, and so on) but felt personally disempowered to do anything but fight fires or perpetuate the current environment, as well as to those who had lost the enjoyment of working in our field. Some of these unhappy souls were managers; some of them were managed. These interactions harshed his mellow so much that he became determined to find a way to extract them from both their internal and their external mire. There's fabulous research being done on the nature of happiness, motivation, and reality hacking that is directly applicable to our field. This talk was intended to provide surprising but practical ways to improve our happiness or the happiness of sysadmins/DevOps who work for us.

There are two kinds of mindsets: fixed and growth. In the fixed mindset, traits are givens and qualities are carved in stone. Every situation calls for a confirmation of intelligence, personality or character; every situation is evaluated (success/fail, smart/dumb, accepted/rejected). It requires a diet of easy successes. For the growth mindset, basic qualities can change over time through efforts and experience, and everyone can do that. Mistakes are opportunities for learning. He asked us to think about which of these we fall into. This controls the meaning of the words failure and effort.

Can you change your mindset? Yes (and he pointed us to mindsetonline.com).

Next, Jon "maddog" Hall spoke about Project Caua.

In Brazil, 86% of people live in urban areas, and Rio de Janeiro is huge and dense. Much of their technology runs on free software. Project Caua's goal is to create millions of new private-sector high-tech jobs by making computers easier to use, creating more environmentally friendly computing, decreasing cellular wireless contention, creating a gratis WiFi bubble over urban areas, and creating low-cost (or even free) supercomputing capability, with the software, hardware, and business materials all open and free, using sustainable private-sector funding.

How? Put high-availability servers running free open source software in tall buildings' basements, on UPS and generators, and then use thin clients (less electricity and heat) connected to the servers by high-throughput Ethernet and PoE. The servers form an HPC grid. An SA/entrepreneur will own this as a business; each SA can start a new business with a base business plan template, marketing materials, and training/certification, and the SA/entrepreneur can be shared across multiple small (1-5 person) companies.

Most people can do everyday tasks, but irregular or rare tasks (e.g., backups, patching) are hard to remember and thus hard to do. Support is moving further away, and we're wasting roughly $7.5B daily (1.5B desktop computers, each averaging 15 minutes, or about $5, of lost time per day). We lose another $265M/day to brownouts intermittently crashing servers.

They're going after several markets: small/medium businesses (no SA), apartments and condos (home automation, multimedia), hospitality (hotels/restaurants), and point-of-sale (POS) terminals.

Thin clients (10W, not 350W) are like embedded systems, with USB 3.0, a wireless mesh router, and so on, but no fan or moving disk (thus no noise) and a long life. If they're built locally in Brazil, they don't have to pay the 100% tariff on imported goods. Use an open BIOS and open drivers. Run off 12V (a universal voltage).

The network will be a mesh LAN; if enough of the routers are direct-wired, it's 1-2 hops at most to get to the Internet. The server machines will be multi-core, with CPUs, memory, and disk turned down when not in use.

There are 194M Brazilians, 167M of them in urban environments. The projection is 400M thin clients plus 70M POS terminals; at 300 clients per server, that's about 1.3M HA servers and 2.6M systems, assuming 100% penetration. Even 10% penetration is 40M thin clients, or about 200k jobs, just in Brazil. Then add the rest of Latin America.

They're in discussion with the government, telcos, banks, and housing/communications projects. A field test of v0.5 is scheduled in February 2012. The problem is getting people to believe that geeks can run their own business and sell stuff and be entrepreneurs. They expect v1.0 in May-June 2012.

Remember, all the software, hardware, and designs are online and free. Surprisingly, this might be applicable to major inner cities in the USA (with high densities, tall buildings, and so on). One question was about wiring the buildings: they're planning on using either copper or modified fiber, but it has to carry power as well. They'll start with new buildings, and the loan covers the cost and is amortized over 10 years.

After the afternoon break, I went to the closing plenary session, "What Is Watson?" delivered by Michael P. Perrone.

The TV quiz show Jeopardy! is famous for giving contestants answers to which they must supply the correct questions. Contestants must be fast, with an almost encyclopedic knowledge of the world and the ability to figure out clues that are vague, involve double meanings, and frequently rely on puns. Early this year, Jeopardy! aired a match involving the two all-time most successful Jeopardy! contestants and Watson, an artificial intelligence system designed by IBM. Watson won the Jeopardy! match by a wide margin. In doing so, it brought the leading edge of computer technology a little closer to human abilities. This presentation described the supercomputer implementation of Watson used for the Jeopardy! match and the challenges that had to be overcome to create a computer capable of accurately answering open-ended, natural-language questions in real time, typically in under 3 seconds.

Google requires us to think about how to phrase a query in a way it understands; Watson doesn't, so it has to handle natural language processing. They trained it by parsing 350 or 450 TB of reading material into syntactic frames (subject, verb, object) and then into semantic frames with probability values ("fluid is a liquid" vs. "his speech was fluid"). They also had to add temporal reasoning, statistical paraphrasing, and geospatial reasoning.

Watson can beat the best human now, so what's next? Broader scientific research and business applications.

A group of us (Amy, Andy, Bob, Frank, Rachel, Simon, Ted, and I) went to dinner at McCormick & Schmick's. Unfortunately I was unimpressed. The service was generally fine (if slightly slower than ideal, but it was a Friday night), but the kitchen was not having a good night. Our server misheard the requested calamari as a California roll, so that appetizer was delayed. One person's steak was significantly too well done. My salmon was overcooked (mainly from sitting under a heat lamp too long). They did comp the coffees with dessert because of the problems. I have no plans to return to Boston before LISA XXX in 2016, so it's not like I'd be able to go back soon anyhow.

Got back from dinner in time to grab the last of the tangerines from my room and bring them by. Caught up with Peg and chatted with a bunch of other folks before turning into a pumpkin around 11:30pm. Made my goodbyes around the room and confirmed with Philip when to meet in the lobby in the morning.


Saturday, December 10

Today was my travel-and-rush day. Despite getting to bed late (around 1am), I was still up around 5am. Caught up on some email, fixed a few problems at work (two machines had disks filling up and two tiers' endpoints were suspending), finished packing, and got to the lobby in time to meet Philip and share a cab to Logan. Got there and through baggage-drop and Security with no trouble (and surprisingly no overweight-bag fee; the desk agent called it 50 lbs. instead of the closer-to-55 it probably was, saving me, and thus my employer, $70).

The flight itself was uneventful. Mild turbulence on both takeoff and landing, but both were on time. Got my bag (after a longer-than-expected delay), got to the car, and got to Mom's by 1:30pm or so via a quick lunch at Steve's Deli (corned beef and pastrami with Swiss cheese and Russian dressing on rye, plus one spear each of new and old dill pickles).




Last update Feb01/20 by Josh Simon (<jss@clock.org>).