Why I Use Aptana

Most of the developers I know use one of the following IDEs:

  • Sublime Text — The latest and greatest for many front-end devs
  • vim or MacVim — A staple for back-end engineers, among others
  • TextMate — For folks who were down with Macs since Day One
  • Emacs — Do you want to use an editor from the 70s but feel like vim is too mainstream? Emacs has you covered.

I am an outlier among my peers in that I use Aptana. At least for now. Before you judge me, please continue reading.

For those who aren’t familiar with Aptana, it is a fork of the Eclipse project that combines Eclipse’s formidable plug-in base with a lot of polish added by the Aptana team. I’ll admit that part of my reason for continuing to use it is that I’m simply too lazy to switch to Sublime Text (which has more momentum behind it) or vim (which would let me use the same local IDE as I use for remote editing). However, the simple fact that Aptana so thoroughly meets my needs means I don’t have any pressing reason to switch.

The main reason I love it is that it brings me down to needing only two open windows while I’m working: Aptana for code edits and commits, and a terminal window to view the results of automated unit tests (plus a third window when I need an actual browser). The git integration in Aptana is second to none; I can visually inspect and stage my commits without leaving my editor. It does, however, lack support for staging selective hunks of a file or creating a new feature branch — two tasks for which I still turn to the terminal. Aptana also has seamless FTP support, which makes pushing changes to shared hosts a breeze. There’s also Heroku deployment that I’ve been meaning to play around with.
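For what it’s worth, those two holdouts are quick one-liners anyway. A minimal sketch (the branch name is just an example):

git checkout -b feature/new-widget   # create and switch to a new feature branch
git add -p                           # interactively stage selective hunks of a file

Everything else stays in Aptana’s git view.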

The configuration options can be a bit daunting because so much of the editor can be tweaked to your liking, but after using it for a while, you’re generally able to find the settings you need. In fact, there’s a lot of cool stuff hidden in the configuration options, like executing PHP and other scripts directly from Aptana, as well as terminal and web browser integrations that can actually make Aptana a single-window solution. I haven’t played with these too much because I prefer to have those applications open in separate windows.

For me, it ultimately comes down to the fact that I’m comfortable with Aptana, and for the stuff I do 90% of the time, using it is second-nature. The core things you need an IDE for (editing, find/replace, easy navigation through large amounts of code, quick access to docs, version control) Aptana does incredibly well.

It does have its failings, however. For a while, its stability was so poor that I affectionately referred to it as Craptana. With the release of version 3.0, though, stability is generally rock solid (apart from the occasional, interminable “Building Workspace” loading message).

My main beef with Aptana at this point is the bizarre window management scheme. Aptana lets you detach windows, move them around and switch between some sort of “tiled mode” and “overlay mode”. The problem is that I don’t want any of this functionality and when I accidentally detach something, I can’t always figure out how to put it back. This is best illustrated with some screenshots.

Here is how Aptana looks when it’s behaving itself:


[Screenshot: Aptana Studio 3 editing validanguage/api.js with the default window layout]

But if I start playing with the windows and moving shit around, I eventually end up with this:


[Screenshot: the same Aptana workspace after the windows have been jumbled]

Every time I get fed up with Aptana and am tempted to make the switch to Sublime Text, it seems to sense my frustration and begins behaving itself. It must have learned that trick from my daughter.

When I’m working, I prefer to focus my energy on the task at hand and not obsess over refinements to my tool chain (sometimes to a fault). This is the main reason it’s 2014 and I’m still using Aptana — it stays out of my way, does what I need it to do and lets me move quickly between tasks.

Updates Coming Soon

I ignored this blog for over a year, and in the intervening period I neglected to keep my WordPress install up to date.

You can guess what happened next: at some point, an unpatched WordPress hole let someone hack the site, upload a PayPal phishing page and start sending it traffic. I noticed when major DNS providers began blacklisting my domain.

Luckily, that’s all sorted and my WordPress install is now only 2 updates behind. Let’s hope none of them are security-related. I should probably get on that…
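If you’re similarly behind, WP-CLI makes catching up painless. A rough sketch, assuming the wp command is installed on your host:

wp core check-update       # list pending core updates
wp core update             # apply the latest core release
wp plugin update --all     # bring every plugin current
wp core verify-checksums   # confirm core files haven't been tampered with

That last command is especially handy after a hack like mine, since it flags any core file that doesn’t match the official release.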

Anyway, one of my resolutions for 2014 is to start blogging again and begin adding tutorials and webcasts to the site, so check back soon, because DrLongGhost.com is rising from the grave once again.

Colonel Titus’s Texas Longhorn Chili

This past weekend I made my old boss Arthur’s recipe for chili. I figured I’d put the recipe up online in case anyone else was interested in checking it out. So without further ado, let’s jump right in with the list of ingredients:

Chili Ingredients (See below for the recipe)

  • 2T olive oil (for browning the meat)
  • 3 lbs. of meat
    You can use any combination of meat here (all ground, all stew beef, 50/50, etc.). Personally, I like 2 lbs. of stew beef and 1 lb. of ground beef or pork.
  • 4T olive oil (for the vegetables)
  • 4 cups chopped onion
  • 8 cloves of garlic, minced or crushed in a press
  • 9 fresh jalapenos: stemmed, seeded and minced.
    Remove the seeds and veins to lessen the heat. If you want to bring the heat down even more, you can use only 6 or 7 jalapenos. But don’t skimp on the dried chilis!
  • Dried chilis: 6 New Mexican, 5 Ancho, 4 Mulato, 4 Cascabel, 3 Pasilla, 3 Guajillo
    It can be difficult to find all the different varieties of chili peppers called for by the recipe. You can substitute chipotle or negro peppers. I treat having at least 4 of the listed varieties as the threshold for making the recipe, then adjust the quantities as needed.
  • A 1 kilogram can of imported Italian whole tomatoes, crushed lightly with a potato masher. Alternately, you can substitute Italian crushed tomatoes in an 800g can if that’s all you can find.
  • 2 cups of beef stock or 1 can of condensed beef broth
  • 4 cups of beef stock or 4 cups of beef broth made from equal parts condensed beef broth and dark beer
  • 4t coarse salt or 2t table salt
  • 2T ground cumin (from toasted seeds if possible)
  • 2T dried oregano (preferably Mexican)
  • 2t cayenne pepper
  • (Optional) 1–2 oz unsweetened dark baking chocolate
  • Cooked pinto beans or other beans (See recipe below for Colonel Titus’s Black Beans)
    The black beans used in the chili are a separate recipe, listed below. Arthur told me that he stored the beans separately from the chili, since when mixed they tend to sour. I also store them separately, but for a different reason — not everyone likes beans in their chili, so this way they can be added to suit each person’s taste.
  • Serve with:
    • Crackers
    • Sour Cream
    • Shredded cheese
    • Minced onions or jalapenos

Black Bean Ingredients (See below for the recipe)

  • 1 lb black beans, picked over, rinsed and soaked in 10 cups of cold water for 8–12 hours or overnight
  • 2 onions — one peeled and halved, the other chopped
  • 2 large green peppers — one halved and seeded, the other chopped
  • 6T olive oil
  • 3 medium cloves of garlic, minced or crushed
  • 1/2 t cayenne pepper
  • 1 t ground cumin
  • 1/2 t ground, toasted coriander
  • 1/2 t ground bay or 1 whole bay leaf
  • 1/2 t ground oregano
  • 1/2 t ground black pepper
  • 1T coarse salt (or 1.5 t table salt)
  • 1/2 of a 6oz can of tomato paste (preferably Italian)
  • 1T red wine vinegar

Chili Recipe

First, you want to prepare the dried chilis. One thing to note when handling dried chilis: your hands can and will be coated in capsaicin (the active ingredient that gives chilis their heat). If you then touch your eyes without thinking, you are in for a bad time. For this reason, I always opt to wear disposable latex or neoprene gloves for the first part of the recipe. That way, I can remove the gloves and wash my hands once and I’m good. If you aren’t wearing gloves, you’ll want to wash your hands very, very thoroughly. These chilis will hurt you.

Anyway, rinse the chilis in water then arrange them on a couple cookie sheets and toast them in the oven at 300 degrees for about 5 to 10 minutes to help bring out the flavor. You should be able to smell them in the oven when they’re ready to come out.

Dried chilis

Next, break apart the chili peppers by hand, still wearing your gloves. Place the chili skins in a bowl and discard the seeds and veins from the guts of the chilis. This lets you get a lot of the flavor from the chilis (the skin) while limiting the amount of heat (the seeds and veins). When you’re done, you will have a nice bowl of dried chili skins:

chili skins

You can discard your gloves now. The next step is to reconstitute the chilis. Prepare a saucepan with the dried chilis, 2 cups of stock (or the beef broth/beer mixture) and the crushed tomatoes. Bring the mixture to a boil, then immediately turn it down to a bare simmer and simmer for 15 minutes. Then remove the pan from the stove top and let sit, covered, for 15 minutes.

Next, transfer this mixture to a blender and puree it. Fill the blender no more than 2/3 full and blend in two batches; don’t try to fit it all in one batch or it will overflow. When you’re done, give it a taste; the flavor of the chilis is amazing:

On to the meat. The last time I made this recipe, I used stew beef and ground beef from a local farmer’s market:

The trick with the stew beef is that you want to cut it into very, very small pieces — as small as your patience allows. There is an element of personal preference here, as some folks like bigger chunks of meat in their chili. Personally, I like the texture of the ground beef combined with the tiny beef chunks:

Next, brown the meat in a skillet with 2T of olive oil. It doesn’t need to be well-done, medium is fine:

You’ll want a large stock pot to cook the chili in. Set the meat aside and cook the onions, jalapenos and garlic with 4T of olive oil in the stock pot:

Once the vegetables are translucent, add the cooked meat, the chili/tomato mixture and the 4 cups of stock (or 2 cups broth/2 cups beer mix). Cook this for 2 hours or so on low heat. Then add the spices and cook another 2–3 hours. If you are adding the unsweetened chocolate, add it an hour before you expect the chili to be done.

As the chili cooks over the course of the 4–5 hours, it will thicken and lose its soupy consistency. If you find it has thickened too much and you want to cook it a little longer, you can add more liquid in the form of water, beer or beef stock. Conversely, if it isn’t thick enough for your liking when it’s done, you can stir in some corn starch or masa harina dissolved in water to thicken it up.

Serve the chili with the following accompaniments: black beans (see below for the recipe), cheddar cheese, crackers and diced onions or jalapenos. Bon appetit:

Colonel T’s Black Beans Recipe

  1. The soaking water will be used for cooking, and the beans should be cooked in a 5-qt pot. When ready to cook, put the beans and their soaking water in the pot along with the bay leaf and the onion and pepper halves, cover, bring to a boil, reduce immediately to a slow simmer and cook till tender but not mushy — 2.5 to 3.5 hours.
  2. When beans are cooked, remove and discard pepper and onion. Drain beans and reserve 2 1/4 cups of bean liquid. Put drained beans back in stew pot. Meanwhile, in a 2qt saucepan, heat 5T of olive oil, add chopped onions, peppers and garlic and cook until onions are translucent. When vegetables are soft, add the spices (except the salt). Stir and cook for 5 minutes then add the tomato paste and 1/4 cup of the reserved bean liquid. Mix well, bring back to a simmer and cook for 5 minutes.
  3. Dissolve the 1T coarse salt in the remaining bean liquid. Put 1 cup of the liquid in a blender or food processor, add the cooked vegetables and blend to a thick puree. Add this to the beans. Pour the remainder of the bean liquid in the blender to mix with the residue of the puree then add that to the beans.
  4. Stir well to combine the beans and puree. Put the pot over heat, bring to a boil, reduce to a simmer and cook, stirring constantly, until the mixture has thickened. Remove from heat and stir in the vinegar. Then drizzle the remaining 1T olive oil over the beans and let stand 15 minutes. Allow to cool, then store covered in the fridge. Like stew and chili, the beans are better the next day, reheated with a bit of water. They continue to improve for a few days, then decline after the 7th day.

Serving suggestion for the beans: Try them with 1 cup of the beans, 1T ketchup and cooked, small pasta.

I Finally Got iPhone to Ubuntu VPN Working with OpenSwan

UPDATED 12/31/11: Added “dpd” items to ipsec.conf and lines to ip-down to enable the iPhone to quickly reconnect without needing to restart ipsec manually.

I am now able to network my iPhone to my Ubuntu desktop with VPN so I can play mp3s off my home NAS drive.

Here is a brief description of what I loaded in my config files to get this working. I’m posting it online in the hope that it will be useful to some folks.

Here is a breakdown of my hardware: Lucid Lynx Ubuntu desktop (internal IP 192.168.1.10), iPhone 3GS, Iomega NAS drive (internal IP 192.168.1.80), Linksys E3000 router.

Networking stuff: I disabled the firewall on my Ubuntu box while I was trying to get this working, then came back and loaded the firewall rules later. I also ensured that my router forwards TCP/UDP ports 500, 1701 and 4500 to the Ubuntu box and that VPN Passthrough is enabled.
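Whatever firewall rules you load later need to cover those same three ports. A minimal iptables sketch (in practice the traffic is all UDP; fold these into your own ruleset):

# IKE negotiation, L2TP, and NAT-Traversal, respectively
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 1701 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT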

First up, I installed Openswan 2.6.33 per the instructions in this thread. Apparently, to get openswan and an iPhone to play nice, you need to use a custom compiled version. Here are my config files (with my password and remote IP addresses modified, of course):

/etc/ipsec.conf

config setup
 nat_traversal=yes
 #charonstart=yes
 #plutostart=yes

conn PSK
 authby=secret
 forceencaps=yes
 pfs=no
 auto=add
 keyingtries=3
 rekey=no
 left=192.168.1.10
 leftnexthop=%defaultroute
 leftprotoport=17/1701
 right=%any
 rightprotoport=17/%any
 rightsubnet=vhost:%priv,%no
 dpddelay=10
 dpdtimeout=10
 dpdaction=clear

include /etc/ipsec.d/l2tp-psk.conf

/etc/ipsec.d/l2tp-psk.conf

conn L2TP-PSK-NAT
 rightsubnet=vhost:%priv
 also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
 #
 # PreSharedSecret needs to be specified in /etc/ipsec.secrets as
 # YourIPAddress     %any: "sharedsecret"
 authby=secret
 pfs=no
 auto=add
 keyingtries=3
 # we cannot rekey for %any, let client rekey
 rekey=no
 # Set ikelifetime and keylife to same defaults windows has
 ikelifetime=8h
 keylife=1h
 # l2tp-over-ipsec is transport mode
 type=transport
 #
 left=192.168.1.10
 #
 # For updated Windows 2000/XP clients,
 # to support old clients as well, use leftprotoport=17/%any
 leftprotoport=17/1701
 #
 # The remote user.
 #
 right=%any
 # Using the magic port of "0" means "any one single port". This is
 # a work around required for Apple OSX clients that use a randomly
 # high port, but propose "0" instead of their port.
 rightprotoport=17/%any
 dpddelay=10
 dpdtimeout=10
 dpdaction=clear

conn passthrough-for-non-l2tp
 type=passthrough
 left=192.168.1.10
 leftnexthop=192.168.1.1
 right=0.0.0.0
 rightsubnet=0.0.0.0/0
 auto=route

I am able to restart OpenSwan via /etc/init.d/ipsec restart. Next up are my configs for xl2tpd which I start with xl2tpd -c /etc/xl2tpd/xl2tpd.conf -s /etc/xl2tpd/l2tp-secrets -D

/etc/xl2tpd/xl2tpd.conf

[global]
debug network = yes
debug tunnel = yes
ipsec saref = no
listen-addr = 192.168.1.10

[lns default]
ip range = 10.42.42.17-10.42.42.31
local ip = 10.42.42.1
require chap = yes
refuse chap = no
refuse pap = no
require authentication = no
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

/etc/ppp/options.xl2tpd

ipcp-accept-local
ipcp-accept-remote
#ms-dns 192.168.1.10
noccp
auth
crtscts
idle 1800
mtu 1410
mru 1410
defaultroute
debug
lock
proxyarp
connect-delay 5000

/etc/ppp/ip-down

Add the following 3 lines to the bottom of the file:
dpddelay=10
dpdtimeout=10
dpdaction=clear
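One thing I haven’t pasted is the secrets themselves. The pre-shared key lives in /etc/ipsec.secrets (the format is spelled out in the comment in l2tp-psk.conf) and the PPP username/password pair lives in /etc/ppp/chap-secrets. Roughly, with placeholder values:

/etc/ipsec.secrets:

192.168.1.10 %any: PSK "sharedsecret"

/etc/ppp/chap-secrets:

# client    server    secret          IP addresses
dan         *         "yourpassword"  *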


And finally the config on my iPhone:


I’m using the L2TP tab. I have my remote IP listed as the server; Account: dan; RSA SecurID: Off; password and secret both entered as my password; Send All Traffic: On; Proxy: Off.


That’s about it. Here are some other links that proved useful while I was hashing my way through this:

Now in HD

Well, I finally made the leap to HD. I decided it was time when the component cable ports on the non-HD TV I’d had since 2001 started acting up and I had to start plugging my Xbox 360 into the TV’s regular composite port. Also, I’d tried to download Mad Men Season 3 and the only rips I could find on isohunt.com were in 720p, and thus unwatchable on my regular XBMC setup.

First, I upgraded the bedroom with a Vizio 26″ 720p TV and a Samsung BD-P3600 Blu-Ray player. The viewing angles on this TV are pretty shitty, so I had to bungee-cord it onto the riser and stick a piece of wood underneath to tilt the whole thing forward enough to get a decent picture while you’re lying in bed. So, for $280, I’m not really that happy with the TV. I have mixed feelings about the Blu-Ray player too. It came with a wireless dongle, so it can hook up to my network and play videos off my Iomega NAS drive, which is pretty cool. However, the interface for this is a little clunky; hopefully they’ll fix it in a future firmware upgrade. Also, it won’t save your position in a DivX or XviD file if you stop playback and want to restart from the same spot later.

Now, for the living room. I read a whole bunch of reviews and decided to get a 46″ Sony LCD from Best Buy. They were having a promotion where you’d save money if you bought a Sony TV with a PlayStation 3. I did some math and figured out that if I went for the promotion and sold the PS3 on eBay for $250 or so, it would work out to a great price on the TV. The funny thing is that I got confused by the Sony model numbers and ended up buying the wrong one. The one I originally intended to buy was the Sony XBR8, which has an LED backlight and supposedly one of the best pictures around. Instead, I got the XBR9, which has a standard, non-LED backlight but displays at 240Hz. I got thrown off because the XBR8 is a 2008 model whereas the XBR9 is newer and the exact same price. If I could go back, I’d definitely get the XBR8, but by the time I realized my mistake, I had the TV all set up and was pretty happy with it, so it’s not a huge deal. I’ll just have to make sure I don’t make the same mistake when I get my virtual-reality, 3-dimensional hologram TV in 2017.

When deciding on the Sony over a Samsung, one of the things I discounted was the Samsung TVs’ inclusion of DivX functionality in the TV itself. This is inarguably pretty cool, but at the end of the day, I was more concerned about straight-up picture quality, since you can always get DivX playback elsewhere. Speaking of which…

Now that I had my kick-ass TV, I needed to figure out how to play HD DivX files. For non-HD content, I’d always used XBMC on my first-gen Xbox, which is, in my opinion, the best media player ever available for non-HD content. Sadly, the first-gen Xbox CPU is too slow to decode 720p or 1080p content, so it looks like its end may be drawing near.

At first, I considered the PlayStation 3. After all, I already had one that came bundled with my TV. But the PS3 doesn’t play mkv files, and unless Sony decides to support them in a future upgrade (I’d say it’s 50/50 whether they ever will), the PS3’s usefulness as a media player is limited. There’s also the limitation that the PS3 uses Bluetooth rather than infrared for its remote control, which prevents you from using it with a universal remote unless you purchase a pricey additional adapter. So I decided I’d sell the PS3 on eBay and look at other media players instead.

Which brings us to the Samsung BD-P3600. I went ahead and bought a second one for the living room because I figured, if nothing else, I’d want a Blu-Ray player. But as I mentioned before, even though it is very capable as an HD DivX player (and can play mkv files), the interface is clunky. I need my media player to connect to both my Iomega NAS drive and a Samba share running on my Ubuntu desktop. The BD-P3600 doesn’t save a list of your network locations, so you need to click “Search network” each time. And in my case, it won’t find either device! I have to go into Manual Search Mode and type the IP address of whichever device I want to access. Way too much of a pain.

So I figured I’d continue using XBMC for playing standard-def content, and when I had a high-def movie, I’d burn it to disc or a thumb drive and play it on the BD-P3600. Then my friend Eric from work turned me on to the Western Digital WD TV Live. This thing is pretty sweet. It’s a tiny box (only 5″ x 4″ x 1.5″) and all it does is play HD DivX, XviD and MPEG movies (and mp3s and photos). I bought one online from bestbuy.com and arranged for store pickup. Initially they gave me the WD TV instead of the WD TV Live; the former has no ethernet port and is intended to be plugged directly into a USB harddrive. After returning to the store to pick up the correct unit, I hooked it up and was playing my first video within 5 minutes.

The WD TV Live is not without its flaws. There is no “manual search” mode for finding a computer or harddrive on your network, so if it isn’t discovered automatically, you’re out of luck. In my case, it found my Iomega NAS drive right away. My desktop’s Samba share over wireless did not show up, but after tweaking some Samba configs (listed at the end of this post), I got it to come up. When held up against the gold standard that is XBMC, the WD TV Live’s interface is simplistic, and navigating it with the included midget-sized remote feels chintzy and unfulfilling. However, once you actually start playing a video, who cares how you got there. The video quality in standard def and in HD is great. It supports subtitle files as well, so I’ll be able to watch all the weird Japanese movies and anime that I periodically download.

One annoying thing: before I realized that the WD TV Live was actually a viable replacement for XBMC on the first-gen Xbox, I bought some new component cables for the Xbox (now that I can use it in 720p mode) and also an Xbox XIR kit. The XIR kit lets you install a small circuit board in your Xbox (no soldering needed) that lets you turn the Xbox on with your remote control. I’m pretty psyched to get both of these, but literally, had I known about the WD TV Live two days earlier, I probably wouldn’t have bought either.

With my new HD setup, I decided I also needed a new stereo receiver that would support HDMI video switching. One of my complaints with my existing Sony receiver is that although it has inputs for s-video and composite, it won’t upconvert the composite signal to go over the s-video cable, so upconversion was definitely my main concern when buying a new receiver. I eventually settled on the Denon AVR-2310CI. This receiver will upconvert any component, composite or s-video signal to HDMI, so you only need a single cable going to your TV. It will also overlay a GUI on top of the video signal to let you more easily manage the receiver’s settings. It supports 7.1 surround sound, but I’m sticking with my existing 5.1 speakers for now. How many speakers does one person really need?

Unfortunately, I haven’t gotten to try out the Denon yet. I’m waiting on a batch of 6 HDMI cables that I ordered before hooking it up. Maybe I’ll post a review later.

One other gadget that I couldn’t resist buying was the Logitech Harmony One remote control. I figured that with all these new gadgets, it was finally the time for me to step up and get a nifty universal remote. For me, the kicker was that someone had posted detailed instructions on programming the Harmony One with the codes to control XBMC. I haven’t gotten it yet, but I can’t wait for it to arrive so I can start tinkering with it.

After buying all this crap, I started to feel pretty guilty and decided that enough was enough. I was not going to buy anything more. So imagine my annoyance when I realized that, after all this, I still won’t be able to stream HD DivX movies from my desktop Samba share. For those who don’t know, Wireless G routers are just barely too slow to stream HD content. I think it’s really close; if they were 64 Mbps instead of 54, it would probably work (for 720p, anyway). Playback may work fine for the first half of a 720p movie, but eventually you’ll want popcorn and use the microwave, or you’ll get some other interference, the bit rate will drop below what’s needed, you’ll chew through the buffer and you’ll start getting choppy playback. My wireless card and my router are both Wireless G, so to stream HD over that connection, both would need to be upgraded to Wireless N — maybe another $200. At this point, I can’t justify that. On the plus side, my living room setup is connected to my Iomega NAS drive via a wired, not wireless, connection. So if I have an HD file on my desktop that I want to watch downstairs, I simply copy it over the wireless to the Iomega first. This takes a couple hours, so it’s annoying, but at least it works. Alternately, I ordered a 16-gig thumb drive off eBay for $20 that I can also use to transfer the files downstairs via sneakernet.

On a final technical, nerdy note, here are the Samba configs (these go in /etc/samba/smb.conf) that I needed to change to get my share recognized by the WD TV Live:

[global]
; General server settings
; For netbios name, keep this short and sweet to avoid any issues
; all lower case and no special chars
netbios name = dand
; I opened it up for all hosts on my network
hosts allow = ALL
; I think setting security to “share” is key and what finally made it work
security = share
; for good measure
null passwords = true

; Again, keep the name of the share simple. No underscores and under 8 chars
[MyFiles]
; My original path to the shared directory was /media/disk/Azureus_Downloads,
; but you want to keep the directory name at 8 chars or less, all lowercase, with
; only alphabetic characters for max compatibility
path = /media/disk/download
; Pretty sure it needs to be browseable
browseable = yes
; Read-only for security
read only = yes
; I think guest mode must be enabled
guest ok = yes
; And force the user/group of your main user on the desktop
force user = dan
force group = dan
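If you’re debugging a similar setup, two standard Samba tools will save you some guesswork:

testparm                  # parse smb.conf and complain about any bad directives
smbclient -L //dand -N    # list the shares the server is exporting, without a password

If MyFiles shows up in the smbclient listing but the WD TV Live still can’t see it, the problem is discovery, not the share definition.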

Kruger Lives!!

I just added some nice Scriptaculous effects to the Blog page. You can now drag and drop to move President Kruger’s head around. Go ahead and try it….

You’ve been able to do that on the rest of the site for the past week, but the Blog page was a little harder to get working. To avoid screwing with the blog page layout, I had to put the image all the way at the bottom of the page and then move it to the upper right corner of the screen.

Anyway, I’ve been pretty busy all around. On the music front, I’m almost finished with the first new Dr. Long Ghost song in years. An mp3 will be forthcoming. My friend Steve has been helping me finish my basement. We got a lot of dry-walling done yesterday. Once all the drywall is finished, Steve’s gonna help me build a bar, which will be pretty sweet.

For those who don’t know, I am now engaged to my wonderful fiancée Nicole. The wedding is gonna be next summer, so we have plenty of time to get ready.

And, of course, I am hard at work on the next Validanguage release. I’m gonna be adding a bunch of new features and fixing one of the more confusing parts of the API. The overall reaction to Validanguage has been pretty gratifying. A fair number of people have downloaded it thus far (thanks in no small part to the links on Ajaxian and WebResourcesDepot). I now have 2 people on the Validanguage mailing list.

I’ve also been playing Jade Empire for the original Xbox. It’s pretty cool. In fact, I think I’m gonna get a beer and play right now.

Copy and Paste Buttons in Linux with my Intellimouse Optical

I cannot live without copy and paste buttons on my mouse.

When I’m doing web development, I find myself constantly copying and pasting text, and being able to do it from my mouse is a convenience I have grown to depend on. Copy and paste are also useful in general computing, when moving files around in a window manager. The mouse I use at home and at work is the Intellimouse Optical.

[Photo: the Microsoft Intellimouse Optical]

Intellimouse Optical:
Featuring five buttons,
including a mouse wheel button

I have had this mouse for about 5 years now and it is very solid and dependable. Microsoft may not even make it anymore, but I have one for home and one for work, and judging by the past 5 years, they aren’t going to break anytime soon.

I have the left-side button mapped to paste (which I can trigger with my thumb) and the mouse wheel button mapped to copy. Since it is all too easy to accidentally trigger the mouse wheel button, I opted to map it to copy, which is non-destructive. I keep the right-side button mapped to delete, which is also extremely useful.

In Windows, the driver CD that came with the mouse makes it a snap to bind copy, paste and delete to the mouse buttons. In Linux, the task is a little more difficult, but after several hours of mucking around, I have it down to a science. Here’s how I got it working, documented for posterity:

  1. Replace the code referencing your mouse in /etc/X11/xorg.conf with the following definition for the MS Intellimouse:

    Section "InputDevice"
       Identifier "Configured Mouse"
       Driver "mouse"
       Option "CorePointer"
       Option "Device" "/dev/input/mice"
       Option "Protocol" "ExplorerPS/2"
       Option "Buttons" "7"
       Option "ZAxisMapping" "4 5"
       Option "ButtonMapping" "1 2 3 6 7"
       Option "Resolution" "100"
    EndSection
  2. Install the program xvkbd, which the xbindkeys config below uses to send the synthetic keystrokes

  3. Install the program xbindkeys and place the following in a file named .xbindkeysrc in your home directory:

    
    "xbindkeys_show"
    control+shift + q
    /usr/X11R6/bin/xvkbd -xsendevent -text "\[Control]c""
    b:2
    /usr/X11R6/bin/xvkbd -xsendevent -text "\[Control]v""
    b:6
    /usr/X11R6/bin/xvkbd -xsendevent -text "\[Delete]""
    b:7
  4. Reboot your computer or restart X windows to load the new xorg.conf file. Then run xbindkeys in the background (xbindkeys &) and you should be all set!
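One extra tip: if your button numbers don’t match mine, run xev from a terminal, click each mouse button over its little test window, and read the button number out of the ButtonPress events it prints:

xev | grep -i button

Whatever numbers it reports are what go in the b:N lines of your .xbindkeysrc.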

Happy copying and pasting!

XBMC Rocks the Hizzy

I spent last weekend tagging my mp3s so I could set up some custom playlists in Xbox Media Center. We have 2 Xboxes in the house (one in the living room and one in the exercise room in the basement), both of which are connected wirelessly to my Windows PC in the office where I have about 80 gigs of mp3s on a shared harddrive.

XBMC has quite a number of kick-ass features, one of which is the ability to set up smart playlists using the ID3 tags on your mp3s. My plan was to tag my mp3s by genre and include a “party” genre so I could set up a playlist of all the party-friendly, danceable stuff.

I ran into some issues when I was trying to get all my party songs to show up in the playlist, so I posted to the XBMC Forum asking for help. Within an hour, one of the XBMC developers had replied to my post. After a little digging, he verified that I had located a bug with XBMC’s support for multiple genres in smart playlists and he was able to commit a patch to the SVN trunk and offer me a couple suggestions for a temporary workaround.
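Incidentally, if you’ve never looked under the hood, XBMC smart playlists are just little .xsp XML files. A rough sketch of a genre-based music playlist (the exact schema varies between XBMC versions, so treat this as illustrative rather than gospel):

<?xml version="1.0" encoding="UTF-8"?>
<smartplaylist type="songs">
  <name>Party Mix</name>
  <match>all</match>
  <rule field="genre" operator="is">Party</rule>
</smartplaylist>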

Needless to say, I was pretty damn impressed with the response to my forum post. Xbox Media Center is the reason I’d never trade my Xbox for an Xbox 360, and with the project still under active development, it’s only gonna keep getting better. I’m seriously considering buying a third Xbox for the bedroom, so I can watch my downloaded Alf XviDs while lying in bed.

I’m just kidding about Alf.

Or am I?

Fun with MBR and Boot.ini

I had a bit of a scare these past 24 hours when I thought that I had fucked up my newly installed and configured Windows XP PC, which for the purposes of this blog I will refer to as Lester. Lester has a 500-gig SATA harddrive, as well as an older 250-gig drive held over from my prior computer. As a result, the smaller, secondary harddrive had 3 partitions on it: an XP system partition, an Ubuntu partition and an NTFS partition with a buttload of mp3s on it.

Last night, I decided to delete the 3 separate partitions on my secondary drive and reformat the whole thing as a single NTFS partition for mp3s. I used Partition Magic and some other partition program I had snagged off BitTorrent. I was very cautious while formatting and succeeded in deleting the partitions and wiping the drive. However, when I rebooted, Lester would not start up and displayed an ominous message about no bootable media being found.

This was pretty damn confusing, since I knew that I had formatted the correct drive and was pretty sure that all my windows files were still present on the main drive. What the fuck had happened? Was it a virus?

Well, to make a long story short, I was eventually able to restore the MBR (Master Boot Record) and boot.ini on the main drive and get it to boot into Windows again (although I ended up installing Windows on the freshly wiped mp3 drive first, just so I’d have something to boot into while I mucked around with the main drive). After I had fixed Lester, I realized what had happened. When I originally installed XP, for some bizarre reason, the boot.ini file was installed not on the hard drive containing my Windows system, but on the hard drive containing my mp3s and the other 2 unused partitions. Thus, when I wiped that drive, it blew away my boot.ini file and Lester was dead in the water.

Even odder, when I installed the temporary version of XP on the newly wiped mp3 drive to give myself access to the files on the main drive, it put the new boot.ini file on the main drive! So, at that point, boot.ini was where I wanted it and I only had to edit it to point at the Windows install on drive 0 instead of drive 1.

In case anyone ever has a similar issue, here are the instructions I finally found on editing the MBR and boot.ini:

  1. You can try restoring the MBR from MS-DOS with either the FIXMBR command or with fdisk /mbr. In my case, this didn’t work, since I was still missing boot.ini.
  2. I was able to edit the new boot.ini file with the following commands, run from the commandline (I don’t think boot.ini is visible within Windows Explorer, even with Show Hidden Files enabled):
    1. attrib -H -S -A -R boot.ini — This sets the boot.ini file as editable
    2. notepad boot.ini — Pops it open in Notepad so you can edit it.
    3. attrib +H +S +A +R boot.ini — Restores boot.ini as a hidden, system file, after you have made your edits and saved.
  3. Reboot and all is well. You can add multiple entries in boot.ini under the [operating systems] section (there’s a full boot.ini example after this list). For me, the operative thing to change was the rdisk option. You’d think the disk option would control which harddrive is being described, but it’s actually the rdisk option that references the physical drive. Thus, I had to change:
    multi(0)disk(0)rdisk(1)partition(1)\Windows
    to
    multi(0)disk(0)rdisk(0)partition(1)\Windows
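For reference, a complete boot.ini is tiny. A generic example (not my exact file):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect

The rdisk() value appears in both the default line and the [operating systems] entry, so change it in both places.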

I must confess that despite the stress of thinking I had lost all my files, it was pretty damn gratifying to actually fix this myself by getting down and dirty with the MBR. Never mind that I wouldn’t have had this problem at all if I hadn’t fucked up in the first place…