Restaurants we like and touristy-kid stuff to do in Galveston

A longtime coworker of mine is visiting Galveston for spring break with her family, and she asked about good restaurants and fun things to do with kids. I’m posting my answer here since it could be useful for my other friends in the future.

Our favorite restaurants are Mama Teresa’s for pizza, Cafe Michael Burger for burgers, and Shrimp and Stuff for fried seafood. Fair warning – none of these places have a beach view. If you want a beach view AND burgers try The Spot.

You have to go to LaKing’s Confectionery for ice cream, sweets, coffee, milkshakes and the antique taffy machine. Then for the local experience walk a few blocks over to mod coffeehouse for more coffee and then Hey Mikey’s for more ice cream. You’re on vacation, right? Double your fun!

Obvious stuff that’s fun for tourists and children:

  • The historic ship Elissa and the associated seaport museum
  • All our other museums
  • Schlitterbahn Waterpark
  • Moody Gardens (you could spend a day or more here)

Oh don’t forget brunch or lunch at Sunflower Cafe and Mosquito Cafe. They’re a block away from each other so I suggest second breakfast at one, elevenses at PattyCakes Bakery (also a block away), and then lunch at the other. They both will have long lines at peak times.

If it’s nice weather then check out Galveston Island Brewery, assuming you like beer. It’s very kid-friendly: they have a play set and yard games.

TIL Cardinality in Alloy* will happily return negative numbers

Figuring out how cardinality works in Alloy* was fun.

First off, I had to ask the TA how to even get the count of a set in Alloy*, which looks like this:
#{SomeFormulaOrSet}
Within the braces you can’t use quantifiers such as all, some, lone, etc.

Then I had to understand how bounds work. For instance
run SomePredicate for 2 Int
bounds the system to a bitwidth of 2, which means Int covers only -2, -1, 0, and 1.

Come to find out, the cardinality of something in Alloy* is limited to that same bounded list of integers and it will happily overflow!

Thus 0 = 0 and 1 = 1, but 2 = -2 and 3 = -1, etc. The first time I saw this, I got -2 where I expected 2. That really threw me for a loop.
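
Here’s a minimal Alloy sketch of the wraparound (the signature and assertion names are mine):

sig Node {}

assert CountIsNonNegative {
  #Node >= 0
}

-- With a bitwidth of 2, Int only holds -2..1, so two Node atoms
-- make #Node = 2, which wraps around to -2
check CountIsNonNegative for 3 Node, 2 Int

The Analyzer finds a counterexample with two or three Node atoms, where #Node evaluates to -2 or -1.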

In hindsight this makes sense, but I still can’t find it explained in the Alloy documentation. It’s probably in one of the tutorials. Thankfully, Stack Overflow came through when I searched for alloy* negative cardinality.

I may start doing more of these short Today I Learned posts. Idea credit to Josh Branchaud, who posted his repository of TILs to Hacker News and who in turn borrowed the idea from thoughtbot. And of course this is all just a boring-software version of reddit’s TIL.

Time flies when you’re having fun

I didn’t post about it at the time, but 2014 was a busy year!

2015 has been busy too:

  • Started a Master’s in Software Engineering at the University of Texas
  • Bought a house
  • Visited Omaha, New Orleans, and Asheville
  • Attended 4 weddings and missed one other

2016 will be mostly surviving grad school and missing out on travel, rugby, a social life, and fitness.

But I’ll have a lot to blog about. So I got that going for me, which is nice.

RIP SSL 3.0

RIP SSL 3.0. Long live TLS.

We use IIS for the marketing site at work, so as soon as I saw this news break I started poking around its HTTPS settings and found nothing. I ended up Googling the issue, and of course it’s a registry edit that needs to be deployed to every server and added to the server setup documentation.

I couldn’t find the official IIS documentation for this, so I put my faith in DigiCert’s instructions on disabling SSL in IIS. Later tests from the Qualys SSL Test site proved it worked. (I ended up finding the actual docs for disabling SSL in IIS while writing this post.)
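
For anyone hitting the same thing, the patch boils down to a .reg file like this (key path per Microsoft’s Schannel documentation; a reboot is required for it to take effect):

Windows Registry Editor Version 5.00

; Disable SSL 3.0 for inbound (server) connections handled by Schannel, which includes IIS
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000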

Timeline: announced by Google on Tuesday, fix committed by me on Wednesday, fix deployed by IT on Thursday. If I’d caught the news sooner, IT might have gotten it out on Wednesday.

There’s no downside to this unless you still have IE6 users that need to access your site via https. We only have one page that uses https and the rest of the site doesn’t support IE6 anyway.

More concerning are the results of the Qualys test. IIS 7 apparently still supports SSL 2.0 and doesn’t have support enabled for TLS 1.1 or 1.2. Guess I’ll be sending out one more registry patch to IT on Monday. (Less worried about SSL 2.0 since IE7 had it disabled by default, but the later TLS versions really need to be on.)
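
That second patch should look roughly like this (same Schannel key layout; a protocol needs Enabled set and DisabledByDefault cleared to actually be on):

Windows Registry Editor Version 5.00

; Kill SSL 2.0 and turn on TLS 1.1 and 1.2 for server connections
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000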

You kids get off my lawn with your Meteor JS

I have a few friends who swear by Meteor, but after reading over the website I’m still not sold on using it over Sails.js, CompoundJS, or any of the other node frameworks.

I’m not trying to knock Meteor down, but I do want to explain to my friends why I won’t be using it for the time being. I feel that Meteor wants to solve a lot of problems that I don’t care about.

Maybe that’s because instead of making a lot of web apps I’m maintaining a B2B marketing website every day. Maybe I’ll look back on this post a year from now and kick myself. But for now, let’s go over what baffles me about Meteor.

  1. “Writing software is too hard and it takes too long.”
    • I like writing software. Taking all the hard parts away from something doesn’t automatically make it fun.
  2. “Write your entire app in pure JavaScript.”
    • This goes for any node framework.
  3. “Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write.”
    • I don’t face this issue often because I don’t change my database schema often.
  4. “No more loading your data from REST endpoints.”
    • I thought REST was cool because of standardization, simplicity, portability, etc. Is a separation of concerns not a concern?
  5. “Latency compensation. When a user makes a change, their screen updates immediately.”
  6. “Hot Code Pushes. Update your app while users are connected without disturbing them.”
    • I’m not sure what the big bad alternative to this is. Wouldn’t users always get the latest the next time they load the site?
  7. “Sensitive code runs in a privileged environment.”
    • Who’s writing sensitive code and then sending it to the client?
  8. “Fully self-contained application bundles.”
    • Neat, but I don’t know how unique to Meteor it is. Seems like node and npm make this pretty easy.
  9. “You can connect anything to Meteor… ” “Just implement the simple DDP protocol.”
    • Meteor wrote an entirely new protocol to connect with other software? What’s wrong with HTTP?
  10. “Meteor’s Smart Packages are actually little programs that can inject code into the client or the server… ”
    • Sounds interesting, but again it’s not solving a problem I typically have.
  11. “Data on the Wire. Don’t send HTML over the network. Send data and let the client decide how to render it.”
    • Sending HTML over the network is how the web was designed to work, and I haven’t figured out what Meteor gains by throwing this out.

See you next year for the inevitable retraction post.

Getting the Vagrant ‘ssh command’ to Work on Windows

SPOILER: It works fine out of the box if you have git’s bin directory in your path.

I was chatting recently with a friend of mine, David Jacoby, and he mentioned a desire to purchase a Mac for development. Since he already had a Windows machine, I asked him why.

(A few years ago, I would have simply nodded in agreement. But Windows has been doing a great job lately of putting open source development tech onto their OS, either by baking it into Visual Studio or by paying the open source developers to port it, like they did with node.)

He told me there’s some difficulty when you mix ssh, vagrant, and Windows (or at least, there was).

I was planning to use vagrant, so I needed to find out for myself. First off, I googled vagrant putty ssh and then read the linked Stack Overflow question.

Supposedly it’s been fixed for 10 months, but it was unclear to me whether the vagrant ssh on Windows fix works with PuTTY or with the ssh command installed with Git.

Since I’d have to test to be sure anyway, I installed Vagrant and VirtualBox.

I followed the Vagrant Getting Started instructions:

$ vagrant init precise32 http://files.vagrantup.com/precise32.box
$ vagrant up

Vagrant up returned an error, but it was a very helpful error that told me to put the VBoxManage.exe binary on my path. I found it in C:\Program Files\Oracle\VirtualBox, added that to my path, and ran vagrant up again.
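
If you’re following along, the quick version of that PATH change in PowerShell looks like this (your install path may differ):

# Adds VirtualBox to the PATH for the current session only;
# use System Properties > Environment Variables to make it permanent
$env:Path += ";C:\Program Files\Oracle\VirtualBox"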

And away it went!

Bringing machine ‘default’ up with ‘virtualbox’ provider…
[default] Box ‘precise32’ was not found. Fetching box from specified URL for
the provider ‘virtualbox’. Note that if the URL does not have
a box for this provider, you should interrupt Vagrant now and add
the box yourself. Otherwise Vagrant will attempt to download the
full box prior to discovering this error.
Downloading or copying the box…
Extracting box…ate: 5401k/s, Estimated time remaining: –:–:–)
Successfully added box ‘precise32’ with provider ‘virtualbox’!
[default] Importing base box ‘precise32’…
[default] Matching MAC address for NAT networking…
[default] Setting the name of the VM…
[default] Clearing any previously set forwarded ports…
[default] Creating shared folders metadata…
[default] Clearing any previously set network interfaces…
[default] Preparing network interfaces based on configuration…
[default] Forwarding ports…
[default] — 22 => 2222 (adapter 1)
[default] Booting VM…
[default] Waiting for machine to boot. This may take a few minutes…
[default] Machine booted and ready!
[default] The guest additions on this VM do not match the installed version of
VirtualBox! In most cases this is fine, but in rare cases it can
cause things such as shared folders to not work properly. If you see
shared folder errors, please update the guest additions within the
virtual machine and reload your VM.
Guest Additions Version: 4.2.0
VirtualBox Version: 4.3
[default] Mounting shared folders…
[default] — /vagrant

Now for the moment of truth: I typed in vagrant ssh and got:

Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
 * Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Fri Sep 14 06:22:31 2012 from 10.0.2.2
vagrant@precise32:~$

So it works great. The only question I have left is, “Is this relying on the SSH command that’s already on my machine? If so, did it come with PowerShell, installing Git on my path, posh-git, GitHub for Windows, or some forgotten tool?”

Update: In the comments, Dave Jacoby posted a command to check your path: ($env:Path).Replace(';', "`n"). We determined that having Git’s bin directory in your path is required.

LunaMetrics, Jonathan Weber – Google Analytics Auto Event Tracking

Google Tag Manager Auto Event Tracking.

Good guide, but you’ll want to walk through the steps in Google Analytics as you read it to fully understand.

This feature isn’t as simple as advertised; it requires some knowledge of HTML and programmatic thinking. The steps it takes are close to what I’d do as a developer to set up event tracking in JavaScript anyway. It’s basically an interface for the more common options.

Best Practices for Linking To and Opening PDFs on your Website

Every now and then I get a request to change the behavior of the links to PDFs on our website. Currently we follow Jakob Nielsen’s recommendation on the issue:

… prevent the browser from opening the document in the first place. Instead, offer users the choice to save the file on their harddisk or to open it in its native application (Adobe Reader for PDF, PowerPoint for slides, etc.).

It’s good advice, but some people find it more convenient when PDFs open within a new browser window.

I explain that this can be problematic for a handful of reasons.

  • It only works if the user has a PDF reader installed and plugged into their browser.
  • Opening a new window when clicking a link is contrary to best practices and accessibility.
  • Navigating back to the webpage is difficult as there is no navigation in the PDF.
  • Many users actually want to download the PDF (and we’ve gotten this specific complaint multiple times in the past).
  • Opening a downloaded PDF is very easy. IE and Firefox both give you an “Open” option, and Chrome puts the download a convenient click away.

Then I offer some links for further reading.

When I dig a little deeper, it turns out that the stakeholder just wants their content to be as accessible as possible. I share that goal, and suggest the best way to do that is to convert the content into a web page. It makes the content more accessible, reusable, responsive, and more easily indexed by search engines.

Of course, converting it takes some additional effort. If the content is valuable enough to change the way links are handled across the site, then it’s valuable enough to put into HTML. And if there’s no time to do it by hand, Google offers a free document viewer that displays the PDF as HTML. Or if it’s an infographic, export it as an image file and put that on a web page.
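
For example, embedding Google’s viewer is just an iframe, something like this (URL pattern from memory, so verify it against Google’s docs; the PDF address here is made up):

<!-- Google Docs Viewer rendering a PDF inline; the url parameter is the URL-encoded address of the PDF -->
<iframe src="https://docs.google.com/viewer?url=http%3A%2F%2Fwww.example.com%2Fwhitepaper.pdf&embedded=true"
  width="600" height="780" style="border: none;"></iframe>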

Failed development processes for the 3 person website team

Things that have been working well for us:

  • Publishing code live weekly or every other week as it’s ready.
  • Using JIRA to store bugs, plans, and good ideas.

Problems we’ve faced:

  • Starting any project that takes more than a day or two for a single developer, as priorities can change week to week.
  • Any estimate on a project of more than a few hours is always off by days.

What I want:

  • To have a few fixed things to work on each week so I can complete them, publish them, and then feel good about it.
  • To get better at estimation so we can start planning ahead more than a few days.
  • Keep stakeholders in the loop with developers.
  • A process that lightly sits over current methods of issue tracking.
  • Smoothly handle the inevitable emergencies that come through for the website.

What we tried first, starting in February 2013:

  • Took all our old JIRA tickets and put them in an “In Queue” bucket, aka “backlog.”
    • In JIRA terms, the bucket started out as a “Fix Version” but eventually became a “Component.”
  • We’d review the backlog, pull some things out, add rough estimates to them, and attempt to put together about 2-3 weeks worth of work.
  • We’d create a bucket for that group (it started out as the JIRA field “Fix Version” and eventually became the JIRA field “Sprint”)
    • We named it like this: *Primary Goal* due on *Month* *Date*
    • Examples: “Fix Browse Support due Feb 28” and “UVC Updates due Feb 19”
    • Some things grouped up organically (like a bunch of SEO fixes) and so we put those into their own buckets (Fix Version, later Epic) but didn’t put a due date on them.
  • We went through all our old groups that didn’t fit this convention and emptied them into the backlog or closed them.
  • When emergencies came up, we intended to create a ticket, set it to blocker, and move it into the currently active sprint and move something lower priority out.
    • We repeatedly missed our due dates because of my poor estimation capabilities and because no one bothered to move items out when emergencies were added in.
  • The idea was we’d launch whatever we had on the due date, and move the remaining items into the backlog and re-assess priorities for the next sprint.
    • Not a bad idea, but we were consistently getting to less than half of the planned items. This led to feelings of failure and disappointment.
  • Each sprint would have its own branch, and the current sprint branch would be merged into staging when complete.
    • This is at odds with publishing mid-sprint, and since the items in sprints aren’t related this means one change could be waiting on a completely unrelated request.
    • It’s also at odds with the successful idea of “feature branches” and the other successful idea of “commit often.”

What we’re starting to do differently now, in August 2013:

  • Shortened the intended length of the “sprint” to about a week because it’s easier to estimate that and semi-emergencies can just be put off until the next sprint.
  • Stopped setting due dates before work even starts.
  • Named sprints after the start date and the most important feature or goal of the sprint.
    • Like this: “2013-08-20 Web RTC Sprint”
  • The idea is you work on the sprint until it’s done. If something is taking a lot longer than planned, break it up and put what remains in the backlog.
  • Hopefully this will lead to feelings of happiness as sprints are completed, improve our estimating skills, and give us something to publish every week or so.
  • Actually take something out if we move an emergency into the sprint. Designate this as a specific person’s job.
  • Use feature branches and feature toggles. Don’t base the branches on the sprint.

Some practices that have been working well for months:

  • Set the primary stakeholder as the reporter or watcher on as many of the tickets as is practical. This way, when one of our stakeholders forces an emergency into the current sprint, the other stakeholder sees us bump something to make room for it. This transparency means bumping something is easy when there’s a good reason or a real emergency.
  • As tickets are completed, commit them to staging so the stakeholder can see the change on staging and approve or suggest changes.
  • Tester, Stakeholder, or other developer will close tickets after confirming them, or re-open if there is a problem. The assignee will not close their own ticket unless it’s trivial or they are the primary stakeholder.
  • Prod updates should always happen at the end of sprints and are encouraged mid-sprint.

Improvements I’ve made to the publishing process since February:

  • Made the production server stop pulling bin files from staging; they’re now built on production or a build server.
  • Set up a test server (besides staging) that can build its own bin files from a repo holding code not necessarily ready for production.
  • Staging stays on the master branch at all times and is the last stop before code goes live.
  • Before all this, we had to test on staging but also clear out any non-prod-ready code before a launch so IT could copy the bin files from staging. Oh, and I was building all the DLL files on my development machine and copying them to staging via Remote Desktop.

picking a server-side language

I made my girlfriend a one-page website for Valentine’s day. It’s cheesy and I won’t link to it, but it’s basically a slideshow of our photos with dates and some commentary. It’s pure html/css/javascript right now. The next step is adding a way to authenticate users and adding an interface for them to upload images and commentary. Which means I now have to decide which server-side language to use. I’m listing some of my options below to help myself decide.

  • php – can host on inexpensive web services & my domain is already set up on one. PHP experience never hurts, but it hasn’t been a focus for me like c# and javascript. However, I could make a WordPress plugin out of this and learn a lot more about WordPress in the process.
  • c# – can host freely on appharbor.com, but a custom domain costs $10 a month. It would be easiest to code in c# since I’ve done similar things before and my dev environment is already running. Maybe I could just add a CNAME or rewrite to the appharbor subdomain instead of paying for the custom domain.
  • python – can host freely on Google App Engine, but a custom domain requires setting up a Google Apps account for it. Free, but it complicates things. Python is an excellent language and easy to learn, but not ideal for me since I can’t use it at the office and it’s not often called for in contract work.
  • node.js – can host freely on heroku, and custom domains are free too. But it runs version 0.4.7 by default, and upping it to 0.6.2 could be a pain, judging from this post by Pat Patterson. I have zero experience with node, but I’m very interested in it. It will have far fewer libraries since it’s so new, but I may have already found one for authentication.

Update (3/6/2012)

I decided on node and got the current version of my app migrated to it in about an hour, thanks to express and jade. Luckily, I just found out that Heroku now supports many versions of node and npm, up to the latest. You just need to specify the node version you want in package.json.
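
For reference, here’s a minimal sketch of that package.json entry (the name and versions are just examples):

{
  "name": "valentine-slideshow",
  "version": "0.0.1",
  "engines": {
    "node": "0.6.x",
    "npm": "1.0.x"
  }
}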

I initially tried to get it working with iisnode and WebMatrix, but I wasn’t able to figure out how to get express working with it. Come to find out, it’s actually an open bug. iisnode is more than I need for this simple app anyway.

What happens to your old pages when you create a new website?

I’m not here to teach, but I wrote this for a web marketing blog at work that never went live. Didn’t want it to go to waste.

Thinking about a website upgrade? Is it on a new system with all new pages? Is anyone thinking about the links to the old pages? You’ll want to make sure you can identify and redirect the most important ones for the sake of your users and your Google rankings.

There will be various ways to get this list depending on your setup, but one good way is to get a report from your analytics tool (Google Analytics is a popular example). Most tools can provide a list of the most commonly visited URLs over a certain date range. Set that range to a year or so and you’ll have a hefty list of your most important pages. Next, use your analytics software’s export feature (most will have one) to save the list in a text or Excel format. If you don’t see an Excel-specific format, look for an export format called Comma Separated Values (CSV), which can be opened by Excel.

While your website may have had many more working links, discovering them and redirecting them to appropriate pages on the new site will offer diminishing returns due to their light use.

Now that you’ve got the list of URLs you want to keep alive, you’ll want to identify the best matches for them on the new site. The matches don’t have to be exact. For instance, we redirected all our old individual calendar events and news items to our new “News and Events” page. That said, avoid redirecting the entire list to the homepage of your new site if you can. This isn’t helpful for users and Google even explicitly recommends against this in the great video on this page about 301 and 302 redirects. That page even includes some links on how to actually code these changes on your web server.
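
If your site happens to run on Apache, each mapping is one mod_alias line in .htaccess (the paths here are made up):

# 301 each important old URL to its closest match on the new site
Redirect 301 /calendar/2012-spring-gala.html /news-and-events/
Redirect 301 /news/old-press-release.asp /news-and-events/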

If you have questions about any of the specifics just post in the comments!

posting a form to two places at once

I’ve spent a lot of time at work lately struggling with how to send a form submission to two different places at once.

I started out just using the jQuery get method. Why not, right? All I wanted to do was make a one-way request to a remote server. It worked as expected, and I moved on to another project.

Unfortunately, I’m spoiled by jQuery’s usual similarity across the various browsers, and I didn’t test for the data going through in any other browsers. It turns out that while Chrome’s behavior is according to spec, it’s still not something the other browsers permit.

After some research, I switched to $.getJSON, which still wouldn’t work until I added ?callback=? to the target URL.

The Wikipedia page for JSONP explains best how it’s done on the back end. Basically, it dynamically adds a script tag with its src set to the remote server and puts the data you want to send in the query string.
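
Stripped down to plain JavaScript, the trick looks something like this (the names and URL are hypothetical):

// JSONP by hand: browsers will happily load a cross-domain script tag,
// and the server responds with a call to the callback named in the query string
function handleReply(data) {
  alert('Remote server replied: ' + data.status);
}
var script = document.createElement('script');
script.src = 'http://remote.example.com/track?name=Bob&callback=handleReply';
document.getElementsByTagName('head')[0].appendChild(script);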

Then we ran into another weird problem: it would work in Firefox while debugging, but not in practice. That means Firefox was executing the form’s POST before letting jQuery finish the getJSON call. Since we needed to pop up an alert for the user anyway, I just added one after getJSON and it worked flawlessly. But in other scenarios there’s still the open question: what’s the best way to give the request enough time to finish?
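
The approach I’d try is holding the form’s own submission until the JSONP request settles. A sketch, with hypothetical names:

$('#contact-form').submit(function (e) {
  e.preventDefault(); // stop the native POST for now
  var form = this;
  // "callback=?" in the URL tells jQuery to treat this as JSONP
  $.getJSON('http://remote.example.com/track?callback=?', $(form).serialize())
    .always(function () {
      form.submit(); // the native submit skips this jQuery handler, so no loop
    });
});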

p.s.
Using JSONP to send and receive any kind of sensitive data is dangerous, as any malicious site could make the same request.

p.p.s.
I wonder if there isn’t some simpler way to do this, but the alternatives sound just as messy, e.g. dynamically opening an iframe with display set to none.

Comments Were Not Working

Comments weren’t working on this blog for some reason.

Problem

When trying to post, you’d be redirected to the post’s URL followed by /comment-page/#comment-. But it should look more like this: /comment-page-1/#comment-74. I’d also get an email about a new comment, but all the normal fields would be blank, i.e. Author, Email, URL, WHOIS, and Comment.

Testing:

Googling the problem, I found that this issue is usually caused by bad permalink structures. So I made sure the .htaccess file was writable and only contained the automatically generated WordPress lines. I even temporarily removed my root folder .htaccess file for a minute, but that didn’t fix the comments (though it did make all the folders display indexes, 1990s style).

Since I do use a custom link structure (/%year%/%postname%/) I turned it off and still had the same problem. With it off I got this URL:
http://www.robertpate.net/blog/?p=200#comment-

And that URL did not 404! But there was still no new comment. Looking at it, I realized the comment should have an ID on the end but didn’t. So it wasn’t a permalinks problem; it was a “comments aren’t getting saved to the database” problem.

I updated all the WordPress files and switched to the new theme, just in case. But since that didn’t solve anything, I logged into cPanel and ran a check and repair on the database. It threw some nasty errors, and then the repair fixed them all in the span of about 60 seconds.
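
For the curious, cPanel’s check and repair boils down to MySQL statements like these (assuming WordPress’s default wp_ table prefix):

-- Spot corruption in the comments table, then rebuild it
CHECK TABLE wp_comments;
REPAIR TABLE wp_comments;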

Today I Learned: 1. Databases can trip and hurt themselves. 2. Timeout errors while uploading via FTP may be due to being on a virtual machine.

SEO Testing Update

Still ranked 6. Google has picked up my new blog title and my previous post. The current list looks like this: RPs (Robert Pates) on the White Pages, an RP on LinkedIn, an RP on Facebook, the RP Wikipedia entry, the RP Cal State page (cstv), and then me. If you’re testing your own stuff, don’t forget to clear your cookies or just open a private browsing session. Google will customize your results otherwise.

Next question: Should I 301 or 302 from robertpate.net to robertpate.net/blog? hanselman.com uses a 301, but I haven’t run across a lot of other blogs doing the same. Currently I’m using a 302.

I think I’ll test that next. While I worry that a 301 redirect may make my root domain less potent if I ever add content there, I will do it anyway, for science!
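
On Apache, the whole test is one line in the root .htaccess; the 302 version just swaps the status code:

# Permanently send the bare domain to the blog
RedirectMatch 301 ^/$ http://www.robertpate.net/blog/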

Comparison of Free Software Licenses to Creative Commons

Back when I was dabbling in writing, I familiarized myself with the various Creative Commons Licenses. Now as a programmer I’m familiarizing myself with the various Free Software Licenses.

Unfortunately, the official list is worthless to someone who doesn’t already understand the differences between the basic types. In googling the issue, I found a few helpful resources: a “quick ref license chooser,” which is a great idea but didn’t help this noob a whole lot, and a video from Red Hat entitled “Open source software licenses explained.” The video was the biggest help and is worth the 6 minutes it takes to watch.

But what I really wanted was something as simple as Creative Commons. I couldn’t find one, so I drew up this comparison. The licenses are obviously not the same, nor are they compatible in many cases; this is only a loose comparison. But I’m hoping it will still increase understanding for those coming from FSF to CC or vice versa.

  1. Attribution-Only – “permissive / non-protective” licenses, i.e. FreeBSD
  2. Attribution-ShareAlike – “copyleft / protective” licenses, i.e. GPL
  3. Attribution-NoDerivs – Here you would keep the source proprietary, but distribute the installer as freeware.

Each of these CC licenses also has a NonCommercial variant that prevents commercial use, but I couldn’t find a parallel to it in free software licenses. Why that is could probably be a whole separate blog post.

For further reading, check out this David Wheeler post on why you should use GPL for your software, the BSD licenses Wikipedia entry, the GNU instructions on how to include GPL in your project, and CC faq entry for why you can’t use a creative commons license on software.

SEO for This Blog and Domain

I’ve moved my blog around a lot over the years. I did this again recently because I realized I was in 9th place in search results for my name. As a web admin, I should be ranking higher simply because I know how to set up a site for good SEO.

So I decided to fix my site up and at the same time test a lot of the best practices.

Since I’m targeting Robert Pate as keywords, I changed the title of my blog from robertpateii to Robert Pate II. I’m not sure how Google treats spaces, but I figured an exact match would be better than a pseudo match. I moved back to robertpate.net instead of robertpateii.com for the same reason.

Also .net made more sense to me, subjectively. I’m not a commercial enterprise. I’m not a networking company either, but at least it’s the right industry for me.

I also made sure I was redirecting everything to www so that there isn’t any duplicate content, redirected all the old URLs from when the blog was on robertpateii.com, and finally updated WordPress manually.
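
The www redirect is the standard mod_rewrite recipe in .htaccess, something like:

# Canonicalize on www so search engines see one copy of each page
RewriteEngine On
RewriteCond %{HTTP_HOST} ^robertpate\.net$ [NC]
RewriteRule ^(.*)$ http://www.robertpate.net/$1 [R=301,L]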

Oh, and I posted this so that there’s some fresh content. In a week or two I’ll update this post with my new Google rank.

June 11th, 2011 Update:
6 days later and I’m now ranked 6 for Robert Pate, but Google hasn’t picked up my new blog title yet. The ones in front of me are whitepages, LinkedIn, Wikipedia, cstv.com, and justia.com. That’s not bad for having little content and no one linking to this site. There are 2 obvious next steps: linking to my own domain from all my online spaces, and adding useful content.

July 8th, 2011 Update:
See the SEO Update Post.

Hello Dolly Plugin with Dance Commander Lyrics

Do you use wordpress too? Are you more interested in awesome than in hope?

Then take my Dance Commander Plugin, which replaces the “Hello, Dolly” lyrics with all the lyrics from “Dance Commander” by Electric Six.

This is not just a plugin, it symbolizes the awesomeness and enthusiasm of an entire generation summed up in six words sung most famously by Dick Valentine: You Must Obey the Dance Commander. When activated you will randomly see a lyric from “Dance Commander” in the upper middle of your admin screen on every page. It can be active at the same time as the original Hello Dolly plugin.

Update on January 14th, 2011: I’ve uploaded a new copy of this plugin that changes the styling back to the original settings. This means you cannot have both on at the same time, but I think it looks better. Here: Dance Commander Plugin – Original Styling

Time Warner, Google TV, and the Internets

You probably didn’t see, or don’t even remember, the little tiff in August that Time Warner had with ESPN/ABC and Disney.

Or the one in December 2009 with FOX.

That’s nice that they kissed and made up, but it’s probably for the last time. The whole model should be, will be, shifting as the internet gets faster and the cable networks wise up.

TWC is just a middle man when it comes to television content. And in this age of internets, middle men are going out the window. Consumers and producers both benefit from direct exchanges, but these direct exchanges are traditionally inconvenient to arrange for both sides. Thus the need for dedicated middle men. The internet opens up distribution by making these exchanges easy to find and execute.

Time Warner, Google TV should scare you, because now online options like Amazon Video on Demand, iTunes show rentals, streaming Netflix, and hulu.com will all suddenly be available on your TV.

TWC should get ahead of the curve and focus on making their internet faster and cheaper, and let the companies that actually produce the content sell it directly over the internet to the consumer. They’ll have to do this in order to get ahead of Google, Verizon, and AT&T fiber networks, and even the growing 4G services (e.g. Clear, Sprint, and now, sort of, T-Mobile). Otherwise, in 10 years Time Warner will find itself with a shrinking percentage of the ISP market and a dying cable television model.

Unless . . . the internet doesn’t get faster. If you dig into the ESPN/Disney agreement, they say “Subscribers will also have unprecedented digital access to online content and expanded Video On Demand services.” But that digital access is now being “authenticated.”[1] It appears free, but you’re really paying for it with your cable fees. And if net neutrality gets destroyed, these authenticated services will run fast while all the other competing online options run slow. That’s why the cable companies, the middle men, are so hot to bring down net neutrality.

I’m hoping Google TV works with all of these services – both free and authenticated. I’m sure Apple won’t play nice since they’re working on a competing product, but that’s Apple for you. As long as we get a marketplace for video that’s open and competitive with multiple providers, the consumers win.


1. Access to a new authenticated service, which will give Time Warner Cable and Bright House Networks subscribers the opportunity to watch the linear networks ESPN, ESPN2 and ESPNU through their broadband services as well as mobile Internet devices, like an iPad. Details on the launch will be forthcoming.

Multilingual Dialogue on the Web

I’ve been working on a user forum for my company. The solution we’re using has built-in translations of the interface, but translation of the user-generated content is necessarily a completely separate project.

I haven’t seen many other sites translate it either. Judging from the forums I’ve used, that’s because this tech is beyond the current scope of most forums’ capabilities. But happily, there are a few neat things being done these days, such as TED Talks allowing open translation of their talks and Meedan enabling multilingual dialogue.

The Meedan article is especially interesting. They use automatic machine translation on every comment and allow open editing by translators. It’s my hope that this kind of crowdsourcing, plus good machine translation, can outpace the compartmentalization of the internet caused by language barriers.

Such systems are not free or easy to implement, even if you’re leveraging the crowd. But English is not going to remain the common language of the internet forever. Does the possibility of three or four different internets worry you? Have you seen other websites out there handling this well? I bet that someone, somewhere, is hard at work on an open-source project to solve this problem.

Do Androids Dream of Electric Sheep

Enjoyed the movie but loved the book. It carries a lot more depth, asks a lot more moral questions of the reader, and develops the plot in a completely different manner and direction.

While I don’t think Deckard’s version of the earth will ever come to pass, it’s still a relevant book for all the questions it asks the reader about what defines a person/soul.

It’s also funny to see science fiction age, e.g. Deckard reading smudged carbon copies in a hover car and using a pay video phone because no one has mobile phones.

Use QR codes to link RL to DBs via the WWW

Here’s a quick idea. First, QR codes are similar to barcodes but are square, hold more information, and are easily scanned with a digital camera lens like the one in your phone. No cumbersome laser scanner needed.

Just download one of the QR scanner applications that are commonplace on Japanese phones and catching on here in the States.

Then your phone can read a QR code out in the real world, e.g. three stickers on a rental movie box: one for good, one for bad, one for ok. Scan the one that matches your opinion, and the code’s URL records your rating in a database via the web.

html formatting

Internet-wise, crossposting first meant posting the same message across many different usenet groups. For us and all the others in the online job searching industry, crossposting means posting one job on another job board. This can happen time and time again, so that by the time a candidate sees a job, he’s got to jump through a bunch of different boards (sometimes registering) to get back to the original that will actually let him apply.

We’re one of those original sites. Recruiters are personally logged into our network looking at our candidates’ profiles. But we’re also new to the scene, so in order to get our jobs out there we have to do a lot of crossposting. And it’s not free, of course. Crossposting is how a large subset of the industry makes its money. There’s even a handy dandy site out there that will manage a lot of your crossposting for you as a one stop shop.

What’s neat about most sites (including the one stop shop) is they all take a lot of basic html tags like lists, bold, line breaks, and paragraphs. We, however, don’t support html formatting within our system (yet, anyway; there are some security issues, but I think eventually we can work around them). So in order to crosspost our 60 or so jobs a week, we have to go in first and mark them up with html.

Now, I fancy myself an HTML/CSS hobbyist, but I think the little task that drives me the craziest is formatting html en masse for all the jobs we’re crossposting. I’ll do a bold tag here, an unordered list tag there, a bunch of list tags to replace bullets over there, and then I’ll do it another seventy-nine times.

Fortunately, though, there’s a free tool perfect for this job called HTML Kit. While it has a bunch of features that are way beyond my skill level, it also includes the capability to bind a combination of keys to insert text, specifically tags. So I can simply highlight a list, hit F7, and it’s wrapped with unordered list tags. Very handy.

Well, it’s 4:37pm on Thanksgiving eve. I’m going to wrap up and get out of here as soon as I figure out who’s going to check up on the India team while I’m gone.

Scaling Up at ItzBig

My career at ItzBig started on 8-15-07.

Actually, it started about three or four weeks prior. I’ll just say that in the job search, no means no, but silence means maybe! This is especially true in small to medium size companies.

My initial impression was one of awe. I was really going to work for a true blue startup company. Throughout college I’d always lived in a quiet horror of going into the machine of a big multinational corporation. I knew it could happen because I wouldn’t say no to one. I like to give everyone a chance, and I knew my dislike of them was a little irrational. (Just a little. I’m sure at some point I’ll start a rant on how giant corporations break the free economy, and thus freedom.)

But I had lucked out! By the grace of God, and the McCombs Business School Alumni Job Board, I’d found my way into my own little start up company. Just in time, too. Things started gearing up even faster after I arrived.

Not quite right after I arrived, though. There was definitely a calm before the storm. I was staying busy learning all the tasks that my predecessor had been taking care of, and adding about fifteen or twenty jobs a week into the system.

Then after about a month, on a Thursday, my boss tells me he’s got about 300 jobs queued up in his inbox. I guess the sales guys started trying or something, but my jaw dropped. I’m pretty sure he cackled.

Maybe he didn’t, but regardless, by Tuesday we had three temps from one of the local temp services that use our network.

It was kind of rocky there at first. They sent us some people interested exclusively in data entry, when the job actually requires about 80% data analysis and the rest data entry. So after a few days of getting caught up on the entry, we basically had 2 people who could only do the five-minute data entry job waiting on 2 people doing the thirty-minute data analysis job. And that 2nd person was me, who often gets called away from analysis for tech support or other details.

But one of the data-entry-only people was called away to her true passion, flower arranging, and they replaced her with another data-entry-only person. I kept her busy with data entry for most of the week, but for the last few days I had her try the analysis. She kept getting really frustrated with it and didn’t seem to enjoy it at all. She didn’t come back on Monday, though I think that might have been of her own will.

Next we got Theresa, who immediately took to the data analysis along with Martina, who’d been doing it since the third day. With them focusing on data analysis and Anna dedicated to data entry, we finally got production running steadily. I still jump in to pick up any slack, but most of my production time now is spent breezing over the job files Gino sends me, checking for jobs we don’t support and for nasty surprises.

Tech support also keeps me busy, since I’m the single tier 1 support operator. It comes somewhat naturally, though, since I’ve always made myself available as a helper for the online games I’ve played in the past.

Gino: Gino’s been a great recent addition to our team. He took over from my boss the task of managing the incoming jobs from success managers and passing them off to me. He’s the contact point for all the sales team and success managers, so I can focus on working with the temps to get everything into the system. Since my boss was doing it before him, it frees up a big chunk of his time, which lately he seems to be spending on developing automated, accurate reporting tools between our SQL server and Excel.

And most recently (today) we hired an intern on from UT. Today’s her first day, so I gave her my “Production Walkthrough” and let her rip. We’ll see how it stands up to real use by someone new.