I’ve recently started putting new posts up at startops.org.
I got tired of maintaining WordPress and will be deleting this blog eventually.
Amazon recently announced High I/O instances on EC2. This got some attention, but not as much as it deserved, in my opinion.
I/O is a big problem with the cloud and was my main reason for not using it for anything of nontrivial volume. If you had a problem that was CPU and/or RAM bound then great but if you needed to run a real database you either had to keep everything in RAM (with a measly 68GB being the absolute top end per box) or spin up physical machines with other providers. The latter isn’t a real option either because the latency will wreck you if any kind of request volume is needed. In my previous benchmarking of RDS and self-installed MySQL on the largest EC2 instance types compared to metal I could spin up at Softlayer I would have needed *dozens* of EC2 instances and many multiples of overall cost per physical server to match transaction throughput.
These new SSD-based High I/O instances are the most interesting announcement I have seen from EC2, period. It fundamentally changes the way I think about EC2. I’ve done some benchmarking with bonnie++, as I typically do for all new machines, and the results are encouraging. While the speed doesn’t touch what you can achieve with an array of SSDs or a PCIe flash card in a physical server, it does match and even beat what you could get out of a high-end RAID 10 of 15k SAS drives. That means you could run real databases and have real I/O inside your EC2 environment with low latency. The pricing is entirely reasonable as well, especially for reserved instances.
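For what it’s worth, here is roughly how I drive bonnie++ on a new box. The mount point and size below are illustrative assumptions; the important part is using a test size of roughly twice the machine’s RAM (so ~120GB on the 60GB High I/O instance) so the page cache can’t absorb the workload.

```shell
# Illustrative bonnie++ run: /mnt/ssd is an assumed mount point for the
# instance's SSD storage; -s is the test size in MB (~2x RAM), -n 0 skips
# the small-file creation tests, -u sets the user to run the test as.
bonnie++ -d /mnt/ssd/bonnie -s 122880 -n 0 -u nobody
```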
The two things that would be nice to see at this point are:
- More RAM in an instance. 60GB of RAM on a machine with 2TB of fast storage is fairly silly. 60GB just is not very much memory nowadays. You can put up to 384GB in a 1U box before the cost curve starts to accelerate unreasonably when buying physical servers (basically, 32GB DIMMs are crazy expensive).
- A couple more instance options would be nice. Perhaps one smaller and then one with more RAM like mentioned above.
Overall I think this changes what sorts of projects make sense on EC2. The provisioning and auto-scaling benefits are obvious and being able to bring in real I/O as needed changes the landscape entirely.
Now if they can just work on their uptime. Barring that, just being honest about uptime would be a good next step.
I recently moved several peripheral machines to using the Percona builds of MySQL with their XtraDB storage engine. I picked up more interest around their products after attending their conference in NYC a couple weeks ago.
The performance difference is substantive and easily seen in the various benchmarks I’ve done. The Percona build hits a higher transactions/sec peak and holds out longer before InnoDB/XtraDB starts thrashing on significant concurrency. I also like knowing there is a support structure in place should it be needed that doesn’t involve talking to Oracle.
Though not yet battle tested, I believe overall it will be a good move, and their stuff is certainly worth a look if you are exploring options with MySQL. They have lots of nice, tested adjustments in the code and bake in the cool addons you read about (HandlerSocket, FlashCache, XtraBackup, etc), plus you know you are working with all the latest good stuff in the InnoDB Plugin, as all of that is included in the base Percona install. It’s a drop-in replacement so replication, MMM, mytop, and all the other tools you are used to work just fine.
But, their packaging and installation is fairly rough. Right now 7 of the most recent 10 posts in their forum are about package incompatibilities and broken installations across Centos and Ubuntu and there isn’t much help to be found from the staff there.
Here’s how I got 5.1 to install on Centos 5.5 machines (where stock 5.0 is the default package) such that I could still install other packages like mytop that depend on a 5.0 shared library. I hate having to force packages in directly outside the package manager but couldn’t find another way given their packages collide with one another. We’re basically forcing in the shared-compat package and then forcing the shared package we want in on top of that. This was to upgrade an existing stock MySQL server.
- Back up your data completely and shove it in a safe place just in case, then as root:
$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
$ yum remove mysql-server mysql mysql-devel mysql-test mysql-bench (you may not have all of these)
$ yum install Percona-Server-server-51 Percona-Server-client-51
- The server will be started if the install finishes with no issues; if so, stop it with
$ /etc/init.d/mysql stop
- Symlink the init script (preference thing, but this way it matches the Centos init script name):
$ ln -s /etc/init.d/mysql /etc/init.d/mysqld
$ yum install Percona-SQL-shared-compat-5.1.42-2.x86_64
- This will error out but will leave the rpm we want in yum’s cache.
$ rpm -Uvh /var/cache/yum/percona/packages/Percona-SQL-shared-compat-5.1.43-2.x86_64.rpm --force
$ yum remove Percona-Server-shared-51
$ yum install Percona-Server-shared-51
- This will error out but will leave the rpm we want in yum’s cache.
$ rpm -Uvh /var/cache/yum/percona/packages/Percona-Server-shared-51*.rpm --force
That should get things installed so try starting mysql back up and watch /var/log/mysqld.log for any errors. You may have old cruft in your 5.0 my.cnf file that is not compatible with 5.1.
Assuming it does start back up you will want to complete the upgrade:
- Run
$ mysqlcheck -A --check-upgrade
to make sure everything checks out. If you get messages containing “Table upgrade required” for some of your InnoDB tables, manually fix them with
alter table MY_TABLE engine=InnoDB;
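If mysqlcheck flags more than a handful of tables, a small loop can generate the fixes. This is a sketch with placeholder table names (substitute the db.table names mysqlcheck actually reported); it prints the statements so you can review them first:

```shell
# Emit an ALTER for each flagged table; the names here are placeholders.
for t in mydb.customers mydb.orders; do
  echo "ALTER TABLE $t ENGINE=InnoDB;"
done
```

Once the output looks right, re-run it piped into `mysql`.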
That should wrap things up. No guarantees but this is what worked for me.
I’ve attended a couple conferences recently and figured I would break the long silence since my last post with a quick list of thoughts on each of them. I’ve got all the numbers for a fun “Cloud IO Sucks Worse Than You Think” post but haven’t pulled it together yet.
Google IO 2011
Definitely worth attending. It’s worth noting that I stuck with infrastructure and App Engine sessions (with one detour to hear about WebGL) and skipped the introductory “101” sessions after being told by my Google IO veteran coworkers that they contained nothing more than could be read from the project overview pages online. You could alternatively have dived into Android for the entire conference.
- Well run, sessions started and ended on time, lines moved fast, registration was a breeze, lunches were excellent and served quickly.
- Some really great content and sessions, the fireside chats were quite good.
- San Francisco is a pretty cool place and a good place to have a conference. I worked out of a few different coffee shops and stopped by a meetup and the tech and startup vibe is pretty intense.
- App Engine has come a long way since I last tinkered with it and I had an itch to build something new on it after leaving this conference. They have really filled out the feature set, improved the foundational pieces, and built some really powerful services around it. You can run Go on it now as well which is pretty cool.
- The Go language has really come along and is very appealing for some of the stuff I end up working on. Read about it at golang.org, where you can watch the Google IO session as well. This was an excellent session; Rob Pike is a super smart guy.
- The loot is ridiculous and spoiling. Tablets, Chromebooks, Verizon LTE hot spots with prepaid data plans, and additional phones and gadgets depending on which sessions you attended.
- “Breakfast” was pretty weak (bagels and coffee) and the wifi was as terrible as expected.
- The demos at the after party were pretty bad. Basically 10 or so groups showing off “new technology” that boiled down to remote controlling something (poorly) with an android phone in all 10 cases.
- The smaller session rooms were not set up very well. Specifically, the slide screens were positioned so low that unless you were in the first row or two you could not see the bottom third of the slides (depending on how large a head the person in front of you had). This was a big deal in sessions involving code.
Percona Live 2011
This was a much smaller conference (hundreds instead of thousands). It was overall solid, but I think it is still trying to figure out its content and how to structure the sessions. From my perspective it started out really weak but finished strong, with some great afternoon sessions and an excellent closing session by Facebook. If I could build my perfect idea of a database conference it would be 50% companies with large deployments talking about their setups and 50% deep diving into database internals and letting people ask questions.
- Some of the sessions were really good. I particularly enjoyed the sessions by TheLadders and ONEsite where they walked through their environments talking about servers, scripts, issues they had run into, etc. The ONEsite session got even cooler as they talked about bringing in Fusion-io drives and had numbers and discussion around the before and after.
- The closing keynote/session by Facebook was fantastic. We again got to walk through a bit of their environment, how they build their servers, hear about production issues they had faced and how they worked around them, and got a sense of their scale. They run a single, huge cluster of MySQL boxes for the most part: 13mm QPS, with roughly 15 people managing it all if you count their engineering, performance, and ops teams. Harrison Fisk (the closing keynote deliverer) was an excellent speaker and I appreciated how quickly and openly he answered questions from the crowd. There was none of the sense of dodging and hiding that I got from some of the Google IO sessions when asked about infrastructure details.
- Walked away with a short but solid list of things I want to benchmark and try at work.
- The opening was not so good: not on schedule, and the content was a walkthrough of high level press releases and sponsor pimping.
- The sessions were too short (some only 30 minutes!) with no padding in between, so no hard questions could be asked and details could not be explored unless you skipped sessions and could pin somebody down.
- Many of the sessions were extremely introductory or high level, smacking of the “101” sessions at Google IO, but in this case there was no way for me to know ahead of time. As an example, the “MySQL on SSDs” session was essentially an overview of what SSDs are, while I was hoping for a deep dive performance comparison that I could diff against my own testing (to be fair though, I think more details were coming but the session was so short he couldn’t get to them or answer any questions).
- I didn’t get any laptops or tablets or awesome gadgets.
I tried to provide my high and low points above. Don’t get me wrong while reading the negatives though, both were definitely worth attending.
I recently received notice of yet another DirecTV price increase in the mail. We had been subscribers for about 3 years and despite being very happy with the service the price was just making less and less sense.
My first response was to see if Dish or Comcast or something else would work better for us but honestly they are all so close in price and channels there is not much difference. This led to considering whether or not we could just drop satellite/cable completely. I started looking at the Apple TV, Roku boxes, etc but decided they were all just too limited. The Mac Mini is what I wanted to try but I was having to think on it a bit longer given the higher price.
Then, I got a completely unexpected check in the mail from an old job. Typically that sort of surprise would go straight into a responsible place but this one got raided for addressing our TV situation.
I cancelled DirecTV. They offered as much as 30% off my bill for 12 months and a DVR+HD box upgrade, but I was determined at that point. Even if you don’t want to cancel you should bluff it and see if they’ll chop your bill down for awhile. Next I picked up a Mac Mini and several other items to put together an HTPC setup. I’ve been very happy with the result, so I wanted to share what worked for us. We’re using AT&T’s 6Mbps DSL service for internet.
Our TV Habits
A big factor in whether this makes sense is the amount and type of TV your household watches. In our case it broke down like this:
- Main, big HDTV in Living Room
- Major network shows (ABC, CBS, NBC, Fox, some stuff on USA).
- We rarely watch shows in real time – mostly DVR.
- American football, college basketball, occasional baseball games. This is going to be the biggest challenge. We picked a good time to start this experiment so I can go until the fall. I don’t watch a ton of games, a huge sports fan would struggle with this setup.
- Less frequently, would watch random home repair, cooking, whatever shows.
- Upstairs, smaller HDTV
- News on in the background while getting ready in the mornings.
- Occasional shows or movies but really don’t use this TV a whole lot.
We were paying about $90/month for our DirecTV service and it just did not feel like we were getting that much value out of it.
Our New Setup
This is the pile of gear I ended up buying to cover everything, all links to Amazon and prices could obviously change. I had some of these items lying around already but including it all for people looking at their own configuration:
- Mac Mini – $668.69
- 2 x Terk HDTVa Amplified Antenna – $35.86 each
- EyeTV TV Tuner – $89.24
- Apple Wireless Keyboard – $69
- Apple Magic Trackpad – $69
- Magic Connector – $29.95, some guy builds these, sells them on ebay w/o tax or shipping. You could probably rig something up yourself instead.
- 2 x 3′ Coax Cables – $2.72 each
- 2-Way Signal Splitter – $0.66
That’s a total of $1,003.70, a good bit of money no question. In my case it was paid for, but even without that convenient shortcut we were paying $1,080 every year for DirecTV service, so after one year we would be saving money regardless given we are paying $0/month now. You could certainly shop around for used gear and save a good bit of money; the prices above are for everything being brand new.
If you are thinking about doing this I would buy just an antenna first and see what kind of signal you get. Honestly the computer on the TV is pretty awesome though and would be fun to setup even if you keep your primary channel source.
An important consideration here is that the main TV with the Mac Mini is going to have access to more content and features than your other TVs. The second antenna listed was so our upstairs TV would have basic HD channels.
The setup is pretty simple:
- Connect the Mac Mini to a HDMI input on your TV. It comes with a HDMI cable.
- Connect the Terk Antenna to the single side of the splitter.
- Connect the splitter to your TV and to the coax side of the EyeTV device.
- Plug the USB side of the EyeTV into the Mac Mini.
- Switch your TV to the antenna input and do a channel scan to completion.
- Turn on the Mac Mini (I would recommend getting it baseline configured elsewhere before hooking up to TV). Go into the Displays and Sound->Output sections of System Preferences to get the video and audio how you want it.
- Install the EyeTV software, do a channel scan to completion, and pull in the TV Guide data.
The Mac Mini is a solid little machine. Its CPU is plenty powerful, the GPU in there is solid, it’s an amazingly small device, and the HDMI out makes it really convenient and plug and play as an HTPC. It detected the display capabilities of my TV so I just selected the one that looked best (720p looked better than 1080i, my TVs are a few years old so no 1080p to try).
The brand of Antenna really doesn’t matter but the Terk units are highly recommended online, reasonably priced, and work very well.
The EyeTV device is really slick. It lets you watch channels from the computer with an on screen remote you can control with the standard Apple remote. More importantly, it functions as a DVR with all the features our DirecTV DVR had. It is however only 1 tuner so that is the purpose for the splitter setup above. This way you can record a show and watch another at the same time (recording on the Mac and watching the antenna input on the TV). It’s nice having a regular computer as a DVR as you have loads of disk space to work with.
The MagicConnector+Trackpad+Keyboard makes a slick package for controlling the TV. There are some nice iPhone/iPad apps you could use instead but I really like having the full keyboard and mouse available. The connector is just a piece of aluminum with velcro pads for securing the input devices. There are some really fancy options out there but they cost over $100 and I thought $30 for this thing was already pushing it pretty severely.
I talk about this more below, but if you have other computers in the house, you can fully control the HTPC Mac Mini from those. After enabling sharing (described below) on the HTPC Mac Mini, just press command-K on one of your other Macs and enter “vnc://name-of-tv-mac” and you have full control. You can use any other VNC client if the other machine is Windows or Linux.
Programming and Features Now Available
So we now have all of the above setup and running. You actually have a ton of programming available, it just takes slightly more knowledge of the system to find what you want.
The first layer of programming is the over-the-air antenna. This brings lots of channels and many of them are really excellent HD quality. I honestly think our local HD channels look better than they did coming through the DirecTV box. It is worth noting that I live 9 miles from the center of Atlanta. Your channels may vary or you may need a better antenna if you live further out.
These are the channels I get with excellent signals. On some of the lesser channels only certain shows are HD. I have excluded all channels that don’t come in perfectly and all non-english channels. I also am not including call letters as I find those worthless.
| Channel | Name | Notes |
| --- | --- | --- |
| 2.2 | RTV | Old school cartoons, TV shows, and movies. |
| 11.3 | NBC Universal Sports | Skiing, Rugby, and other non-primetime sports. |
| 14.1 | ION (HD) | Reruns of shows mostly from 2005-2008, occasional random movies. |
| 14.2 | ION Qubo | Kids cartoons and shows. |
| 14.3 | ION Life (HD) | Like a TLC knock off, 30 minute food/home/life shows. |
| 17.1 | TBS / PeachtreeTV (HD) | |
| 32.1 | APGuide | Guide for Atlanta OTA channels. Full of ads and commercials. |
| 32.3 | this TV | Old shows and movies from MGM and United Artists. Lots of kids shows. |
| 32.5 | Oldie | Really old movies (black and white). |
| 32.7 | Tuff TV | Outdoorsy shows, reruns of old wrestling, kickboxing, racing, and similar. |
| 32.8 | Legacy | Fishing shows, shopping shows, other random stuff. |
| 32.10 | Corner | 24/7 Infomercials, not sure why this exists. |
| 32.21 – 32.30 | | Various audio-only music channels. |
| 36.1 | MyTV (www.mynetworktv.com) (HD) | USA Network stuff. |
| 63.1 – 63.5 | Church Channels | |
| 69.1 | The CW (HD) | TBS-ish channel with food/home/life shows mixed in. |
Instead of using that guide channel listed above just use TitanTV.com – it’s a free, nice guide for OTA channels.
With the EyeTV unit you can DVR any of the above and watch them all from your computer.
With a computer connected to the TV you have lots of options added:
- The major network websites (ABC, CBS, NBC, etc). Most post their shows the day after they air but Fox waits 8 days.
- Hulu – Great for watching older stuff or shows you missed though not as useful for the latter when you have the OTA setup and can access the major network websites directly. You don’t really need Hulu Plus since you are using a regular computer for access and they can’t tell you have it connected to a TV.
- ESPN3 – Lets you watch most sports live. This will be a key component of the main drawback to this setup (no ESPN). It works really well but is not HD quality and occasionally the streaming stutters. Some ISPs do not have access to this but ours (AT&T DSL) does.
- Various other channels (e.g. HBO) through something that rhymes with horrent. Transmission is a super nice OSX client for this.
It is worth mentioning some software called Plex out there for the Mac. It basically streamlines all of the above internet sources of content into a single interface that can be navigated with the Apple remote. It works really well and I have it installed but typically don’t have a need to fire it up given I have a full keyboard and mouse.
Again the main drawback to this setup in my opinion is no ESPN for sports. But, between having HD ABC, NBC, Fox, CBS and ESPN3.com I think we can manage it just fine. I would honestly pay up to $20/month for a cable or satellite service that was nothing but ESPN channels.
For movies, you can rent from Apple/iTunes, keep a Netflix membership, or rent from Amazon Video on Demand so lots of options there.
Our Home Network and Other Benefits
One big bonus to this is you have a fully baked computer connected to your TV. It makes a great central server for the house. In our case, we now have our personal Apple machines (MBP and iMac), our new Mac Mini, and an old laptop that serves as a backup location and daapd server – that setup was previously described in my Cheap Home Network Storage post.
Some steps to make your new Mac Mini easy to use as a server:
- Give your new machine a good, simple name like “tv”.
- Give it a static IP or even better use dd-wrt or similar and assign the tv computer a reserved DHCP entry. The advantage to the latter is the dd-wrt router can pass the hostname through as a DNS entry to your local network.
- Setup sharing of your picture, movie, and music folders on your other Macs. Now the new Mac Mini can play any of that content over the network and the iTunes on the new Mac can see all the other iTunes collections in the house (or any daapd share, our Linux share shows up in iTunes too). We import videos and pictures to the iMac and can now just watch those on the main TV over the network without having to take any extra steps.
- Turn on “Screen Sharing”, “File Sharing”, and “Remote Login” for the Mac Mini. You can now connect to it remotely from any machine in the house using SSH or VNC. As mentioned above, from another Mac you can just press command-K, enter “vnc://tv”, login, and you have full control. The file sharing being enabled lets you easily push files over. SSH is handy too for remote control. Play with the “say” command in OSX for some fun – can make your TV talk in weird voices to the people in your living room.
- You will probably want to play with the energy saver settings on the new Mac Mini. You don’t want it going to sleep right away. I have display sleep at 1 hour and machine sleep turned off.
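A few of the steps above can be done or exercised from a terminal. These are sketches, assuming the machine is named “tv” as suggested and that Remote Login/Screen Sharing are enabled (file names are made up):

```shell
# Push a file to the HTPC, then make the living room TV talk:
scp funny-video.m4v tv:~/Movies/
ssh tv 'say -v Zarvox "I am watching you"'

# From another Mac, open a full screen-sharing session:
open vnc://tv

# Set the energy saver behavior from the command line (run on the
# Mac Mini itself): display sleep at 60 minutes, machine sleep off.
sudo pmset displaysleep 60 sleep 0
pmset -g    # verify the current settings
```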
That’s our new HTPC setup. I am very happy with it and we haven’t missed the satellite at all yet. We currently don’t pay for any services like Netflix or rent shows from iTunes or Amazon but those are options if we needed more content. Rumor is that Amazon is going to open up free VOD for Prime subscribers and if that happens we’ll be there.
The fall will be a good test given football gets started again but I suspect we’ll be able to manage just fine.
A Roku box or Apple TV or something along those lines would cost less and might work fine, but it is really nice to have a full computer connected to the TV to serve as a central home server, available for whatever service you would want running in your home.
This is my approach to dealing with email lists and spam. I’ve been using this setup (or something like it, gmail wasn’t around initially) for about 10 years and it works really well. It’s easy to setup and costs about $10/year.
This is a simpler setup that gets it done. You gain a bit more control by using a Google Apps account to handle this sort of thing separately, but that involves several more steps and setting up MX records. This post is meant as a how-to for nontechnical people unfamiliar with DNS.
Step 1: Gmail Account
Sign up for gmail. If you already have a gmail account this step is done. I recommend gmail because their spam detection and filtering options are quite good. Plus, gmail is free and feature-wise one of the best options out there.
The good spam detection is important as this account will get hit by a lot of brute force type spammers that just slam random names at new domain names. Gmail will filter all of this out so you never notice it.
Step 2: Domain Name
Register a domain name. This could be your name, or just some word you like (e.g. johnsmith.com). This will be the domain name on all the emails you sign up for things with. I recommend using namecheap.com. They just changed their interface to make it more confusing in my opinion, but are still pretty good. Here are the steps if this process is not familiar to you.
- At http://namecheap.com hit “My Account”.
- Click “Signup for an Account Now” on the bottom right and go through the process.
- Once logged in, select “Domains -> Register a Domain” from the top menu.
- Try different domains until you find one you like, add it to your cart, and purchase it.
- Select “My Account -> Manage Domains”, then click the domain you purchased from the list on the right.
- Click “All Host Records”, select the “Free Email Forwarding” radio button, and hit “Save Changes”
- Click “E-mail Forwarding Setup” on the left, and fill out the first line. Enter a single asterisk (*) in the left field and your new gmail account on the right. This will forward any email @yourdomain.com to the gmail account. You could hand someone any address at your domain (say, whatever@yourdomain.com) and it would work just fine without any extra steps, getting forwarded along to the gmail account.
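If you want a quick sanity check from a terminal that the DNS side is in place (optional, and “yourdomain.com” is of course a placeholder for your own domain):

```shell
# Free email forwarding works by pointing your domain's MX records at the
# registrar's forwarding servers; this just confirms MX records exist.
dig +short MX yourdomain.com
```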
Signing up for Lists
Now whenever you sign up for a list use a unique email at the domain you registered and setup forwarding on. I do this for everything and very rarely give out my main, personal email address. Some good candidates for using these:
- All politicians and political campaigns. This one is very important because political people have no respect or common sense when it comes to the internet and email. I do firstname.lastname@mydomain with the name of the candidate or occasionally include the campaign year as well.
- All utility companies (e.g. powercompany@yourdomain.com). This can cause some interesting conversations with customer service as they will think your account email is fake but it usually isn’t too big a deal.
- All web sites. In addition to helping with spam tracking this has the added benefit of a tiny shred of extra security: you will have a different email address associated with each website (e.g. sitename@yourdomain.com).
With this setup in place you have picked up a lot of advantages:
- You can see which lists have sold you out and passed your email along to other list buyers and renters.
- Related to that, you know who the buyers are and can call them out.
- If a particular email gets out of control (say from a politician selling it to everybody with a dollar) you can setup a filter in the gmail account that just deletes emails to that address immediately, effectively shutting it down.
- If you ever change your main email account (say you switch from gmail to hotmail) you can simply update the single forwarding rule at http://namecheap.com and all of those addresses you have handed out over the years continue working without any trouble at all.
- For contests, coupons, etc that only allow one entry per email you can just create whatever arbitrary count you want (e.g. contest1@yourdomain.com, contest2@yourdomain.com, etc).
- If you are a developer this can be pretty handy for testing things out end-to-end with a real email address. I’ve created random lists of thousands of emails before to test sends and can just use a quick filter in gmail to clean things up.
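Generating such a throwaway list is a one-liner. A sketch, where “mydomain.com” stands in for whatever domain you registered:

```shell
# Emit 1000 unique addresses at your forwarding domain into a file.
for i in $(seq 1 1000); do
  echo "loadtest${i}@mydomain.com"
done > test-addresses.txt

head -3 test-addresses.txt
```

Then a single gmail filter matching "to:loadtest" can archive or delete the whole batch after the test send.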
The command line is one of the greatest benefits of using Linux or OSX over Windows. One tip that some people do not know is that many familiar emacs shortcuts work on the command line as well. If you are not an emacs user already, here are some of the most basic but useful shortcuts:
| Shortcut | Action |
| --- | --- |
| CTRL-a | Move cursor to beginning of line. |
| CTRL-e | Move cursor to end of line. |
| ESC-b | Move cursor back one word. |
| ESC-f | Move cursor forward one word. |
| CTRL-k | Cuts all characters to the right of your cursor. |
| CTRL-w | Cuts all characters to the left of your cursor. |
| CTRL-y | Paste cut characters at current cursor position. |
| CTRL-SHIFT-dash | Undo cuts, pastes, and typing steps. |
| CTRL-r | Reverse search through your recently executed commands. This one is perhaps the most useful of all the above. Just hit the shortcut, type the first characters of a command you want to run again, and hit enter. |
For emacs users these are well baked into muscle memory, but even if you hate emacs they are worth learning for when you are working on the command line. Just the small set listed above is enough of a foundation that you can quickly make big changes to your commands with just a few keystrokes and generally without having to touch the arrow keys. Combining these basics with a few additional tricks (like using “cd -” to jump back to the previous path, sudo !!, etc) can really get your speed up.
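The two extra tricks mentioned are worth spelling out:

```shell
# "cd -" flips back to the previous working directory:
cd /usr
cd /tmp
cd -          # prints /usr and returns you there

# "!!" expands to the previous command (interactive shells only),
# most useful when you forgot sudo:
#   apt-get install foo    # fails: permission denied
#   sudo !!                # re-runs as: sudo apt-get install foo
```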
In September Dell rolled out a beastly new monitor called the U3011. I’ve been fortunate enough to work with one of these things for the past week or so and wanted to share my impressions.
This sucker is a 30″ IPS display with the following goodies:
- 2560×1600 resolution (I greatly prefer 16:10 over 16:9 for doing work)
- 7ms response time
- 1 x DisplayPort
- 2 x DVI-D
- 2 x HDMI
- 1 x Component Video
- 1 x VGA
- USB 2.0 Hub with 4 out ports (2 on side)
- 7 in 1 Media Reader
- Picture in Picture
- A stand that allows adjustment of all angles and height
This is the finest monitor I have ever used. My personal laptop is a 17″ MBP and I always felt that the larger monitors I connected it to looked faded and weak compared to the bright 1920×1200 display on the laptop itself. Many monitors lately are 16:9 as well which I definitely do not like as much. That extra chunk of vertical space makes a big difference.
This U3011 looks every bit as magnificent as the MBP display except on a larger, higher resolution scale.
I had not previously used a 30″ display for long periods of time and I had heard that continuous use was bothersome for some people because of the size. I haven’t had any issues but the desk it sits on is fairly deep so that may be helping.
As a slight dig on Apple my MBP required a $100 mini-DisplayPort to dual-link DVI adapter to drive this monitor at full resolution. The Linux desktop with a $50 display card jammed in it had no trouble and required nothing special to work. You have to bring either DVI-D (you can tell by the number of pins in your connectors and the cable – see wikipedia’s DVI article if unsure) or DisplayPort to the screen to get the full 2560×1600.
Overall though this thing is glorious and highly recommended.
Magento is a PHP Zend-based ecommerce solution. Go check out their site.
Looks great, doesn’t it? Magento most certainly is NOT great, and this post serves as an additional warning (there are a lot of them out there now so I am just piling on) to those considering it for anything. These notes were made against v220.127.116.11, so some of it may have improved, though I doubt anything substantive has.
For all the chatter about how feature rich and impressively architected Magento is it lacks a lot of features. A sampling:
- You cannot delete an order (without directly hitting dozens of tables in the DB to do it manually).
- You cannot directly change the status of an order (without hacking on some core config files to make this possible in a buggy way).
- You cannot edit an existing order directly.
- You cannot control the initial order status of orders in a custom way.
- Canceled orders still show up in all sales reports.
- You cannot export orders in any format.
Lots of flawed stuff going on with the general structure of this thing as well:
- Magento is SLOW. Check out this post where they proudly trumpet 5 requests per second in a new version. 5 requests per second? That is some horribly slow stuff.
- Related to the above, they pointlessly used a poorly implemented EAV (entity-attribute-value) model for the database schema. In practice their schema generates dozens of joins on every query, makes it borderline impossible to figure out manual SQL queries, forces you to use their PHP objects to build up queries, and most importantly ensures that literally dozens of database queries run for every single page load.
- The worst combination of convention and configuration imaginable is in play. You must configure EVERYTHING with countless, verbose objects and huge balls of XML to stitch everything together and then they put a vast, undocumented layer of conventions on TOP of all that configuration to ensure no one can pick up the system easily.
- Theming Magento requires creating dozens of files in multiple folder hierarchies and modifying way too many configuration files. It is a really heavyweight process, though at least this side of it is better documented than the rest.
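To make the EAV complaint concrete, here is a minimal sketch of why that style of schema inflates query complexity. This is NOT Magento's actual schema; the table and column names are invented for illustration. In an EAV layout every attribute of an entity lives in its own row of a value table, so reading even three attributes of one product takes three joins (or three separate queries), where a conventional flat table would need a single-row lookup:

```python
import sqlite3

# Illustrative EAV schema (hypothetical names, not Magento's real tables):
# one row per (entity, attribute) pair instead of one row per entity.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE entity (entity_id INTEGER PRIMARY KEY);
CREATE TABLE attribute (attribute_id INTEGER PRIMARY KEY, code TEXT);
CREATE TABLE value_varchar (entity_id INTEGER, attribute_id INTEGER, value TEXT);

INSERT INTO entity VALUES (1);
INSERT INTO attribute VALUES (1, 'name'), (2, 'sku'), (3, 'color');
INSERT INTO value_varchar VALUES (1, 1, 'Widget'), (1, 2, 'W-100'), (1, 3, 'red');
""")

# Fetching just three attributes of one product already takes three
# self-joins on the value table; a flat table would be SELECT * on one row.
row = c.execute("""
SELECT v1.value, v2.value, v3.value
FROM entity e
JOIN value_varchar v1 ON v1.entity_id = e.entity_id AND v1.attribute_id = 1
JOIN value_varchar v2 ON v2.entity_id = e.entity_id AND v2.attribute_id = 2
JOIN value_varchar v3 ON v3.entity_id = e.entity_id AND v3.attribute_id = 3
WHERE e.entity_id = 1
""").fetchone()
print(row)  # ('Widget', 'W-100', 'red')
```

Scale that pattern up to the dozens of attributes on a real catalog page and you get the join explosion described above; it also explains why you end up forced through the PHP query-builder objects instead of writing SQL by hand.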
The main overarching problem with Magento? It is overwhelmingly a vehicle for drumming up consulting and support contracts. I suspect most of the above pain points go away if you are willing to pay the $3k – $13k every year for a commercial version plus who knows how much for consulting on top of that. Its creators have no desire for anyone to pull down the software and do something useful with it without paying. The support forums are deadly silent, even the most trivial of plugins (or plugins to fill in sorely lacking features that should be built in) cost hundreds of dollars, and useful documentation is painfully difficult to find. Magento is an incredibly impressive accomplishment of architecture astronauts trying to see how complex they could make an ecommerce solution. This is the most closed and uninviting open source project I have ever encountered.
Side note, this guy’s blog is probably the best documentation I have been able to find so if you are stuck working with this platform his posts are absolutely worth a read.
That’s all I have to say about Magento. It absolutely has some positives and some very powerful features, but unless you are willing to pay big money you should stay away and consider another solution. I would recommend checking out a hosted solution like Shopify or Core Commerce.
Amendment 1 on the Georgia ballot this coming Tuesday is a terrible thing. It has the potential to make a very real and very negative impact on startups in this state and on the appeal of working here for highly skilled and desired engineers and entrepreneurs.
You likely already know about this amendment. If you do not, please read about it and about why it is a bad idea (note also the borderline criminal wording this thing will have on the ballot).
In short this amendment is about noncompete agreements and making them more enforceable in the state of Georgia.
Noncompetes are the tool of incompetent, unqualified, and malicious leaders
I challenge anyone to convince me otherwise.
As an employer, are you worried about investing in the training of a new employee and then losing them to a competitor? Pay them right and treat them well and they will not leave. Are you worried about an employee finding a better way to do what your company does, starting a new company, and bringing you down? Pay them right, treat them well (and listen to their ideas) and they will not leave. Are you worried about competitors poaching your best people? Pay them right and treat them well and they will not leave.
If a company refuses to innovate they deserve to lose their best people and to be brought down by faster moving competitors.
Good leaders and good companies don’t need noncompetes. If an individual wants to foster a sterile, anemic culture and pay low wages they deserve to be alone and out of business and should not have the crutch of a noncompete to lean on.
Join the facebook group: http://www.facebook.com/GAVoteNoOn1
Tell your family and friends.