The Blog of Curtis Chambers

Archive for the ‘Technology’ Category

Disaster Response Technology and Local News

with one comment

In the wake of the rampant wildfires in southern California over the past week, I thought I’d talk a little bit about the use of technology in keeping the public aware of the disaster’s progress.  Most people know of my particular disdain for local news, mostly because they sensationalize mundane or outdated stories for the sake of ratings.  Here’s a classic example of why I hate local news:

[embedded video]

Almost nothing they say on local news is something that I haven’t already read or seen on the Internet, and if it is something I haven’t already consumed online, it usually doesn’t interest me.  That said, having the TV stations do nothing but 24-hour coverage of the fires didn’t help that opinion.  They asked dumb questions in a repetitive fashion, replayed the same day-old video footage like it was new and rarely gave you the information you needed.  The Internet, however, had a plethora of great resources that gave you the information you needed, and did it faster than the TV stations.

KPBS, which is traditionally a non-profit TV station, had the best Internet coverage as far as I could tell.  They set up a Twitter account and posted up-to-the-minute updates with the most important information.  I’ve never really been a fan of Twitter, but this is a really good use of the technology.  Twitter lets you get updates by looking at the site, subscribing to the RSS feed, or receiving IMs or text messages on your cell phone.  All of these methods are free to the user and keep you in the know faster than TV news, without all the associated garbage.
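
If you’d rather not watch a web page all day, those same updates can be pulled by a short script.  Here’s a minimal sketch, assuming the Python feedparser library; the feed URL is a hypothetical placeholder, so substitute the real RSS link from the KPBS Twitter page.

    import time
    import feedparser

    # Hypothetical feed address; grab the real RSS link from the Twitter page.
    FEED_URL = "http://twitter.com/statuses/user_timeline/kpbsnews.rss"

    seen = set()
    while True:
        # feedparser fetches and parses the feed in one call
        for entry in feedparser.parse(FEED_URL).entries:
            if entry.link not in seen:   # only print updates we haven't shown yet
                seen.add(entry.link)
                print(entry.title)
        time.sleep(300)                  # poll again in five minutes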

Google Maps Fire

KPBS also set up a Google Maps mashup with all the fire information in geographical form.  It showed where all the fires were burning, where the shelters were set up for both people and animals, which roads were closed, which neighborhoods you were allowed to re-enter and a lot of other miscellaneous data.  It was also updated as soon as data came in from the authorities.
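
Under the hood, an overlay like that can be as simple as a KML file, which Google Maps and Google Earth both know how to display.  Here’s a hypothetical Python sketch of generating one; the landmark names are real (Qualcomm Stadium and the Del Mar Fairgrounds really did serve as shelters), but the coordinates and statuses are rough placeholders, not actual fire data.

    # Each (name, longitude, latitude, status) row becomes a KML Placemark.
    fire_data = [
        ("Witch Creek Fire", -116.87, 33.04, "Active fire"),
        ("Qualcomm Stadium", -117.12, 32.78, "Shelter for people"),
        ("Del Mar Fairgrounds", -117.26, 32.97, "Shelter for large animals"),
    ]

    placemarks = []
    for name, lon, lat, status in fire_data:
        placemarks.append(
            "  <Placemark>\n"
            "    <name>%s</name>\n"
            "    <description>%s</description>\n"
            "    <Point><coordinates>%f,%f,0</coordinates></Point>\n"
            "  </Placemark>" % (name, status, lon, lat)
        )

    # KML wants coordinates as longitude,latitude,altitude.
    kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
           '<Document>\n%s\n</Document>\n</kml>' % "\n".join(placemarks))

    with open("fires.kml", "w") as f:
        f.write(kml)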

The San Diego Union-Tribune set up a special fire blog with updates as well.  There were also hundreds of other bloggers posting firsthand updates about the fires, shelters, etc.  Local news just can’t compete with citizen journalism because it can’t scale to the sheer number of people available to report.

Facebook also helped me keep track of how my friends in the area were doing, as they updated their status and posted comments on other people’s walls.  It allowed me to more efficiently find out how people were doing and keep the phone lines clear for emergency personnel to use.

So what does it all mean?

It means that people are starting to get used to information being tailored to their needs and available in multiple formats, rather than passively viewing it on TV.

It means that given the opportunity, people will organize and distribute quality information by themselves in order to help others.

It means that the few major media outlets will have less control over information flow in the future and that small armies of dedicated citizens will give people a choice when it comes to the type of information they want to receive.

This is a Good Thing™.

Written by Curtis Chambers

October 27, 2007 at 11:39 pm

Infophilia and the Convenience of Technology

with 7 comments

After reading Brad’s post on Infophilia, I can’t help but think that I have the exact same symptoms. I’m addicted to learning new things and acquiring information. I can’t stop, and if I run out of things to read in my RSS reader, I just start reading anything I can get my hands on, even if it’s something as mundane as the back of a Lysol bottle. Here are a few of the things I’ve been actively working on learning in the last week, using a variety of books, news articles, software products, museums, and websites:

  • French
  • Japanese
  • Barack Obama’s history
  • JFK’s history
  • Random facts about Boston and Cambridge
  • How to alter the autocomplete functionality in Drupal
  • How to analyze football stats to produce the ultimate fantasy football team
  • Swarming algorithms involved in various methods of P2P file transfers
  • How to make applesauce

Applesauce

*The above picture is an artist’s rendition of how Amy makes applesauce

A lot of people joke with me about how much I’m on the computer, but it really is just a means to achieve this information overload in a more convenient and efficient way. I remember in elementary school and junior high, I would spend the entirety of my time after school at the library reading until it closed and then go home and log on to BBSes and the early versions of the World Wide Web to read more. I’ve always had this constant desire to consume information in its various forms and the Internet just makes it even easier, especially with news.

Google Reader tells me that in the last month I read an average of about 140 stories a day. While I believe that RSS readers such as Google Reader have made it much easier to keep up with the news compared to traditional websites and newspapers, there is still a long way to go in the social news world before it is truly efficient. While I read an average of 140 stories a day, I only shared/starred an average of 6 stories per day. That means that only about 4% of the stories delivered to me were good enough to share with others or keep in my stash of bookmarks. I don’t have any statistics on what fraction of the stories in an average newspaper people enjoy, but I’d imagine it’s somewhere near that. The computer is a tool that should make this process more efficient, though, and I’m hoping that with the coming generation of social news services that analyze your reading patterns to deliver more relevant stories, the number will increase to at least 50%.

But it’s not just limited to news. Technology also makes other types of information more readily available. For example, Rosetta Stone makes it incredibly easy to learn a new language the same way you learned your first language, and you can do it anywhere you have a computer. There are also eBook readers that allow you to hold as many books as you want on a single device. I have a few books loaded onto my iPhone so that when I’m standing around waiting for a bus or subway, I can just whip it out and read right there.

I do have one fear associated with this Infophilia (disorder perhaps?). I notice that the more I learn and the more information I consume, the more my memories of the past seem to fade away. It’s as if my brain is a hard drive that’s running at capacity and keeps deleting old files to make room for new ones. While I love having all the latest and greatest information, there are some older memories that I’d really prefer not to lose. The fact that my digital photo library only starts at 2001 is rather disheartening, as I’m afraid that at some point in the future I’ll have to rely on it to trigger memories of the past.

So my challenge to Brad in his quest to unlock the secrets of the human brain is to find a way to unlock the other 92% of my brain that I supposedly don’t use. I could use the extra gigabytes.

Written by Curtis Chambers

October 9, 2007 at 1:56 pm

How to hack the iPhone with AppTapp

with one comment

AppTapp Screenshot

Hacking the iPhone was pretty difficult until NullRiver came out with an amazing application called AppTapp that automates the whole process.  Now you just run an installer and it adds Installer.app to your iPhone, a graphical package manager similar to apt-get or yum on Linux.  You can choose from a ton of applications that people have developed for it, and it automatically updates them all when you open Installer.app on your phone.

Here are the simple instructions for getting 3rd party apps on your iPhone:

  1. Go to the AppTapp beta site and download the appropriate version for your OS and iTunes version.
  2. Quit iTunes and double-click the installer to install it onto your phone.
  3. You now have a new icon on the Home screen called Installer that you can use to automatically install/update a large number of 3rd party apps.

There are a lot of really nerdy apps in there, such as Python and Perl interpreters, but there are also some interesting ones.  Here’s a rundown of what I thought of some of them.

  • SummerBoard:  By far the best application for the iPhone.  It allows you to basically change the way the Home screen looks and acts.  I highly recommend it for anyone that is planning on installing more than two 3rd party apps.
  • Community Sources:  Gives you access to other community-maintained repositories, thereby increasing the number of applications you can install.
  • Books:  This application allows you to download eBooks to your iPhone for reading while on the go.  Manybooks.net is a good complement to this application, as it has a ton of public domain and Creative Commons books that you can save for free.  The only downside is that you need to somehow upload the books to the phone, but there’s documentation explaining how to do this with either a script or FTP (see the sketch after this list).
  • OpenSSH:  This is a good one to have as it allows you to use SSH to transfer files and run commands.
  • Term-vt100:  This is a good one to have if you need to execute commands on your phone.  I personally use it to administer my Linux boxes with SSH when I’m not in front of my computer.  Running top on it can be fun as well if you’re into seeing what’s going on behind the scenes on your iPhone.
  • iBlackjack:  This game is extremely buggy as it doesn’t have all the rules plugged in yet, but once it gets a little love I’m definitely going to use this to waste a few minutes while waiting in lines.
  • Tap Tap Revolution:  This one just came out and I’m not a huge fan of it, but it definitely shows some creativity in how to use the touch screen for games.
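
Since a couple of the items above involve copying files onto the phone, here’s a rough sketch of what that can look like over SSH.  Everything here is an assumption to verify rather than the documented procedure: it presumes the OpenSSH package is installed on the phone, the Python paramiko library is installed on your computer, the root password is still the stock “alpine”, and Books reads from /var/root/Media/EBooks.

    import paramiko

    IPHONE_IP = "10.0.1.5"  # your iPhone's Wi-Fi address (placeholder)

    # "alpine" was the stock root password on early iPhones; change it as soon
    # as you install OpenSSH.
    transport = paramiko.Transport((IPHONE_IP, 22))
    transport.connect(username="root", password="alpine")

    # Copy a public domain book into the folder Books reportedly reads from
    # (an assumption worth double-checking).
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put("pride_and_prejudice.txt",
             "/var/root/Media/EBooks/pride_and_prejudice.txt")

    sftp.close()
    transport.close()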

There’s a ton of other packages as well and the list seems to grow daily, but these are the ones I recommend playing around with to see the full potential of the iPhone.

Written by Curtis Chambers

September 13, 2007 at 9:26 pm

Posted in iPhone, Technology

Digital Communism

leave a comment »

A trend that I’ve noticed with the rise of “Web 2.0” and open source software is something that I call Digital Communism.  The concept is similar to regular Communism in the sense that everyone pitches in for the good of the populace, but it relates less to economic systems than to our digital lifestyles and software.  Here I will present the different classes of users that power Digital Communism so that I can better illustrate what it all means.

Contributors

There are many ways that people contribute to the Digital Collective.  People write articles in Wikipedia, upload videos to YouTube and submit news articles to Digg.  None of the people doing this get any sort of financial gain, but rather do it because they want to share their knowledge and media with others.  In reality, a lot of it is probably powered by the narcissism of the current generation wanting to be noticed in an increasingly anonymous society, but it’s a different type of currency than money; it’s social currency.

In the realm of open-source software, these are the people that submit their code to the world for scrutiny and improvement.  They are people like Linus Torvalds, who started a small software project as a hobby that eventually turned into Linux, which is the operating system that powers the Web 2.0 revolution.

Contributors make up about 1% of a particular community’s user base.

Participants

There are also many users that don’t necessarily contribute to the Digital Collective, but they actively participate by leaving opinions, correcting mistakes or tagging items.  Rather than create uniquely new content, they edit, critique and help organize the contributions of others.  In some communities, this has the great benefit of improving the work and offering alternative perspectives.  In others, it is not so valuable.

Participants in the open-source community are extremely valuable as they find and report bugs, help fix bugs or even assist with documentation.  Some might say that the participants are even more valuable than the contributors as they help improve the quality of the raw contribution.

Participants make up about 10% of a particular community’s user base.

Passive Users

The major critique of Communism is that not everyone does their fair share, and that holds true in Digital Communism.  The passive users of the Digital Collective are the ones that absorb the information but do not interact with it.  They read, they watch and they listen, but they do not want to be heard.  However, that does not mean they are without value.  Without consumers, production would be for naught.

Passive users give a particular open-source product a user base, which increases its clout as a product.  Firefox claims almost 400 million downloads, which gives it a lot more exposure than if it were only used by some guy in his basement.

Also, over time users tend to become participants, who in turn become contributors.  One example of this is Facebook, which used memcached to make its site faster, then needed it to be better, so they fixed some bugs, and now they’re the biggest contributor of code to the project.

Passive users make up about 89% of a particular community’s user base.

As you can see, the different types of users reflect different levels of involvement.  In fact, the distribution of users somewhat resembles that of the medieval feudal system.  Back then, you had one ruler with a small group of advisors and aristocrats, and a huge lower class of peasants working in the fields.

The industrial revolution then brought many of the lower class up into the middle class.  The real question is whether the same will happen with Digital Communism.  If a large majority of users started participating with the media, what would happen?  It could trigger a Golden Age of Information, or it could go the other way entirely and degrade the quality of information through saturation.  It will be interesting to see how it all turns out.

Written by Curtis Chambers

August 15, 2007 at 6:11 pm

The Future of TV and Movies

with 3 comments

I’ve been saying for a while now that television as we know it today is on its way out.  With major networks putting their shows online for free and on iTunes for $1.99, the Internet is the future of video distribution.  The TV itself might not go away, as we still need a screen to watch everything on, but I think traditional broadcasting and the big media moguls’ days are numbered.

There are a few reasons why this will inevitably happen.  One is the insanely hectic schedule that people keep now.  No one has the time to religiously watch shows during the standard time slots anymore.  TiVo and On-Demand were baby steps toward the grand Internet video paradigm of “anything I want to watch, whenever I feel like it, without having to remember to record it.”

Another reason for the shift is that the Internet knows no bounds.  Currently, there are 3 hours of primetime per night and only 5 major networks.  That limit on the amount of premium content that can be shown makes it very hard for an artist to get into those few spots, but it also lets networks charge huge amounts of money to advertise during those shows.  As Terry Heaton said in a paper of his, “Why pay a $500 CPM for a television ad that estimates reaching a thousand people when an online ad will honestly deliver those thousand people?  It makes no sense.”  A $500 CPM rate is unheard of in the online world (a $20 CPM rate is pretty good online): it works out to 50 cents per viewer on TV versus 2 cents online, yet television shows garner huge dollars for untargeted audiences.  That money is headed directly for the Internet once the business world realizes the power of online advertising, and it will be spread among a much wider range of content providers while being targeted specifically to the viewer of the show.

Another major benefit of the Internet is exposure.  In the old model, even if you’re an amazing actor, director or cinematographer, you still have to jump through hoops and might never be able to make something that people see.  With the ability to create and post all the video you want online, it’s up to you to make sure people can see your creations, instead of some suit with no creative talent at all.  However, that raises the question: how does an aspiring artist market their work without millions of dollars behind it?

That is the real question, and it is currently being answered in a variety of ways.  Creative marketing techniques are coming out of the woodwork, invented by the people behind the works themselves.  My favorite example of the new school of Internet marketing is a feature film put together by Arin Crumley and Susan Buice called Four Eyed Monsters.  They financed the film by putting $100,000 on their credit cards and posted the entire thing on YouTube for anyone to view.  However, they made a deal with Spout.com to receive $1 for every new user they created for Spout.  So far they’ve made $35,443 from that revenue source.  They also made it possible to download a DRM-free, high-quality version of the film for $8, and you can purchase extra materials as well.  I threw down $8 for it because the first 20 minutes I watched on YouTube looked great, and I’m sure several others of the 724,198 people who have watched the YouTube version have too.  Even if only 2% of those people purchased it (about 14,500 sales, or roughly $116,000 at $8 each), they’d make a profit just from that.

The technology is also driving the democratization of video.  Miro, formerly known as Democracy Player, came out in its first public preview today and it looks amazing.  It already has over 1,400 channels of video, all of it free and created by independent filmmakers or organizations that support free video.  It utilizes BitTorrent for downloads, so there isn’t server congestion for popular videos.  It even searches all the major video sharing services like YouTube and can save their videos to your computer.  All I have to say is…

This is the beginning of the revolution.

Written by Curtis Chambers

July 18, 2007 at 12:34 am

The Graphical Keyboard User Interface — Followup

leave a comment »

Original post here.

It seems that there’s a lot of active discussion going on about this right now. There’s a new blog by Clay Barnes that seems to be focused solely on this issue, and he’s posted two articles about the mouse’s decline. He also references a couple of great posts by Jeff Atwood (I’m a huge fan of his blog). The first one talks about going commando and weaning yourself off the mouse. I think that’s a great idea and increases productivity by leaps and bounds. As I said in my previous post, I only use the mouse when I absolutely have to. His second post about how Vista makes it easier to find things via the keyboard is good, but I wouldn’t be a true Mac zealot without saying that Microsoft ripped it off from OS X’s Spotlight feature, which was then copied and made better by Quicksilver.

I really like this new trend of research in the keyboard navigation arena. Perhaps the resurgence is due to the old school DOS/UNIX junkies getting nostalgic for the days when all you had was a keyboard. I remember using a DOS word processor called Textra back in the late ’80s, and it was controlled entirely with the Function keys. F1-F10-F7 saved a document. Each time you pressed a function key, it changed the menu options along the bottom of the screen. Pico/nano in UNIX are similar, only they change the menu options along the bottom using Control-key combinations.

I remember in college I was a total emacs guy, and I got very adept at using all the various tools via Control-key combinations. I’m lazy now and use TextMate, but at least it integrates well with the shell and has bundles. I’m really hoping to get deep into vi, but I just haven’t had the time to learn more than the basic shortcuts. Any great tutorials that people can link me to?

Written by Curtis Chambers

July 9, 2007 at 11:57 am

Posted in HCI, Technology, Usability

Computer science without math

with 5 comments

There’s an ongoing debate about this article over on Slashdot right now. While I completely disagree with the author of the book that math shouldn’t be part of computer science, I do believe that a lot needs to be changed in traditional computer science education.

First off, we need to look at the traditional definition of a computer. Back in the day, the word computer referred to anything that performed computations, even a human. If I sat in a room and punched numbers into a calculator to perform computations, I was a computer. We then created mechanical computers that could perform those same calculations even faster than we could, and subsequently we created the electronic computers that we all use today. But the basic word computer comes from something (or someone) that performs sequences of computations, and those sequences of computations are called algorithms. This is why, when you major in computer science at a university, a good majority of the time is spent analyzing algorithms and the math behind them.
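
To make that concrete, here is the canonical sort of example an algorithms course spends its time on. It is not from the article, just a standard illustration: binary search, whose entire value comes from the math of its analysis.

    # Each comparison halves the remaining search space, so finding an item
    # among n sorted items takes O(log n) steps versus O(n) for a linear scan.
    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid        # found it: return the index
            elif sorted_items[mid] < target:
                lo = mid + 1      # target can only be in the upper half
            else:
                hi = mid - 1      # target can only be in the lower half
        return -1                 # not present

    # A million sorted items need at most about 20 comparisons: 2**20 > 10**6.
    print(binary_search(list(range(1000000)), 765432))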

So from a historical standpoint, I can understand why a degree in computer science is very math intensive. A pure computer scientist is only concerned with algorithms and how to compute them. However, computers have come a long way in the past several years, and one could argue that even though underlying algorithms still power the programs we use, the majority of people no longer use computers for computation, but rather for communication. The computational aspect of computers has, for the most part, been abstracted away.

For example, the average computer user only uses the computer to browse the Internet, instant message/e-mail friends, manipulate digital photos and listen to music. According to an NPR survey, 92% of Americans under 60 have used a computer, 75% have used the Internet, 67% have sent an e-mail and 68% use a computer at work. So while 20+ years ago the users of computers were mostly computer scientists performing calculations, today we’re in the minority of computer users. In my mind, this means that the field of computer science needs to be broadened beyond pure algorithmic study.

Due to the acceptance of computers and the Internet by the mainstream population, we now have a great deal of non-computational issues worth discussing that deal with the communication, business, and legal aspects of computing. Things such as privacy, security, intellectual property and media distribution are studied in grad school, yet seem to have no forum for discussion in the current undergraduate system. Should there not be discussion of what a computer scientist creates before they create it, and whether it should be created at all? A similar ethics question is typically posed to traditional scientists in regard to things such as the atomic bomb and cloning. Just because we can create it doesn’t necessarily mean we should, right?

I think that computers are becoming more than tools and are starting to actually shape society. People are using them for communication, personal connections, business and many other things. Some people are even proposing new job types such as Director of Metadata. Perhaps these things should fall under other fields of study such as communications, sociology and philosophy, but I find that those departments haven’t been quick to take up the study of these new technologies. What can we do to educate people about the new fields of study in relation to computer science?

Written by Curtis Chambers

July 8, 2007 at 2:27 pm

Posted in Technology, Thoughts
