2009 post: Key Bioinformatics Computer Skills

Note: this was written in 2009 so… out of date somewhat!

I’ve been asked several times about which computer skills are critical for bioinformatics. Important: note that I am addressing only the “computer skills” side of things here. This is my list for being a functional, comfortable bioinformatician.

  1. SQL and knowledge of databases. I always recommend that people start with MySQL, because it is cross-platform, very popular, and extremely well developed.
  2. Perl or Python. Preferably perl. (2017 update: Python wins now!) It kills me to write this, because I like python so much more than perl, but from a “getting the most useful skills” perspective, I think you have to choose perl.
  3. basic Linux. Actually, being at a semi-sys admin level is even better. I always tell people to go “cold turkey” and just install Linux on their computer and commit to using it exclusively for a while. (Due to OpenOffice etc, this should be mostly doable these days). This will force a person to get comfortable. Learning to use a Mac from the command line is an ok second option, as is Solaris etc. Still, I’d have to say Linux would be preferred.
  4. basic bash shell scripting. There are still too many cases where this ends up being “just the thing to do”. And of course, this all applies to Mac.
  5. Some experience with Java or other “traditional languages” or a real understanding of modern programming paradigms. This may seem lame or vague. But it is important to understand how traditional programming languages approach problems. At minimum, this ensures some exposure to concepts like object-oriented programming, functional programming, libraries, etc. I know that one can get all of this with python and, yes, even perl – but I fear that many bioinformatics people get away without knowing these things, to their detriment.
  6. R + Bioconductor. So many great packages in Bioconductor. Comfort with R can solve a lot of problems quickly. R is only growing; if I could buy stock in R, I would!

This may seem like a lot, but many of these items fit together very well. For example, one could go “cold turkey” and just use Linux and commit to doing bioinformatics by using a combination of R, perl and shell scripting, and an SQL-based database (MySQL). It is very common in bioinformatics to link these pieces, so… not so bad, in the end, I think.

As always, comments welcome…


2009 post: Free, easy, quick, great PDF creation: Try OpenOffice

keywords: free software, opensource, OpenOffice, grantwriting

I try to give credit where credit is due.

I have written before about using OpenOffice (version 2.4) for “real professional work.” In an earlier post, I wrote about successfully writing an entire grant application using OpenOffice for word processing and figure creation in conjunction with Zotero for references (and the grant was funded, so…).

PDF creation from OpenOffice (use “Export to PDF” in the File menu) simply works great. It is very fast and the pdf quality is excellent. One note – it does not open the pdf automatically – it just stores the file – so pay attention to this. This works much better than printing to a pdf using the Adobe PDF printer or using the Microsoft Office 2007 export to pdf functions (which, besides being slow, caused Microsoft Office to crash occasionally on my machine).

Also, before I forget, I really like OpenOffice Draw for scientific figure creation – I use it a lot in my work and I have been quite happy with it. I’m using Microsoft Office a fair amount now, but I still use Draw to make figures. I’ve used Zotero and Draw fairly intensely for well over a year now.

Note: This is almost entirely based on using OpenOffice 2.4. The current version is 3.0, which I just downloaded.

2008 post: TAMALg: is the package available?

I’ve received a lot of questions recently about TAMALg availability. Unfortunately, there is only a difficult-to-install package available right now; I sent it to someone recently and they had a terrible time getting it going.

I do describe the algorithm in the supplementary materials to the ENCODE spike-in competition paper (Johnson et al, Genome Research 2008).

I would love to have a simple package to distribute, but this is little supported in today’s granting environment; in fact, I don’t think that making algorithms widely available has ever been well-supported by any US funding agency. And I doubt the situation is different here in Canada.

I may be getting another undergrad soon and would task that person with working on the package. As a new faculty member, I am simply overwhelmed with basics like getting my lab going right now.

I do hope that this situation changes and thanks to all for patience.

As I have noted previously, the L2L3combo predictions produced by the TAMALPAIS server (see previous posts on this or just search for “TAMALPAIS Bieda” – no quotes, though) are the same predictions as made by TAMALg. TAMALg also adds the step of estimating enrichment using maxfour-type methodology.

So you can get good TAMALg predictions of sites just by using the webserver. I suggest going this route.

And to repeat – TAMALg is almost certainly NOT what you want for promoter arrays. Except if you have a factor in only a tiny fraction of promoters or one of the newer designs with very long promoter regions (e.g. for 10 kb promoters, might be ok).

2008 post: Linux Installation on HP Pavilion Desktop (June 2008 purchase)

This may be helpful to someone, so I’ll keep this post alive.

keywords: Mark Bieda, HP, Linux, install, installation

This is just a brief post about my (read: my student’s) experience with installing linux on a new HP Pavilion. This is a standard model available at Futureshop and BestBuy: intel quad-core Q6600 processor, 640 GB hard disk, 3 GB RAM. Nice machine, only $899 here in Canada (sure to be cheaper in the USA).

So I’ve installed linux on several laptops and desktops, including Mandriva, Red Hat, Fedora, Suse. And of course I have run Knoppix and, as indicated in an earlier post, have been using DSL (Damn Small Linux) under VMPlayer for a while now.

So this time, let the undergrad do it!

Here are the notes:
(1) this computer had Windows Vista on it. Home Premium edition. We wanted to keep windows, not because I love windows, but because I have some key software that only runs on windows (e.g. NimbleGen SignalMap for looking at data).
(2) Installation of OpenSuse 10.3 caused a conflict with the windows system which led to a restore operation (nothing was lost, no big deal). So we dropped working on this one – and went to working on Ubuntu 8.04 LTS.
(3) The big problem was that the ethernet card, built into the motherboard, has known problems with talking to current linux distros. The joy of a new computer!
(4) Ubuntu installed well except for the ethernet card deal, which is a big problem.
(5) To solve the ethernet card problem, we just ended up buying a new card for the computer – it was only $19.76 at our friendly University of Calgary MicroIT store. Model is “Gigabit Ethernet PCI Card” from startech.com. The model number appears to be ST1000BT32. This solved the problem, although MFU (My Friendly Undergrad) had to disable the onboard ethernet in the BIOS (which was not deadly, but otherwise led to one of those long pauses in bootup).

The Results
Everything seems to run very well. The computer is happy, it talks to the internet (from both windows and linux) and, as usual, everything runs just a bit (or a lot, depending) faster on the linux side vs the windows side.

I am a longtime KDE user, and I really like KDE in this distribution (downloaded and installed as packages in Ubuntu). I guess it is technically Kubuntu, but like I said, the undergrad was doing the installation so… I got to skip on thinking about this stuff.

2008 post: Sqlite (Sqlite3) quick tips: if you know SQL already

keywords: Mark Bieda, SQL, Sqlite, Sqlite3

I’m a long-time MySQL user, but recently I’ve been using sqlite (sqlite3).
This is a sqlite tutorial, in a sense, if you know SQL.

As with my other stuff, this is based on my real experience of using this system.

Why use sqlite?
The basic thing is that it installs super fast (unbelievably, you just download a .exe file for windows and run it). This is in contrast to the big MySQL model. You get to skip all that client-server business (which is really important in many cases, but not for most stuff that I do).

installation and getting started
1. download and (on windows) just place the .exe somewhere. I like to place it in C:\sqlite3\
2. (windows) At the Start button, click Run and enter cmd as the run command. Go to C:\sqlite3 and run
sqlite3 temp.db

Critical stuff to know
.help — gives the list of dot commands. Important and useful
.separator "," — means to separate input and output columns (fields) by commas
.separator "\t" — same but with tabs
(important) – you have to set the separator before attempting to load data from a file into the database
.output myresults.txt — starts directing all query (like SELECT statements) output to myresults.txt
.output stdout — starts directing all query (like SELECT statements) output to stdout; will close any previous output file
.import gooddata.csv mytable — imports data from gooddata.csv to mytable using the current separator value to separate fields
.tables — a list of the tables in the database
.databases — a list of the databases
.schema mytable — statements used to create mytable; will also list indexes (useful!)

Control of Sqlite3:
Ctrl-c — ends Sqlite3 (the .exit dot command also works)
; — a semicolon must be used to end each SQL statement

A typical session
Note: I “made up” this session, so there could be a few small bugs…
create table mytable (idnum varchar(20), salary float, age int);
.separator "\t"
.import persondata.txt mytable
create index idex on mytable(idnum);
select * from mytable where age<30;
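If you end up scripting this sort of thing, the same session can also be sketched with Python’s built-in sqlite3 module. This is just a minimal sketch: the table layout and the file name persondata.txt are the made-up examples from the session above, and the sketch writes a small tab-separated input file first so that it runs on its own.

```python
import csv
import sqlite3

# Write a tiny tab-separated stand-in for persondata.txt so the sketch
# runs on its own (in real use, this file would already exist).
with open("persondata.txt", "w") as f:
    f.write("a101\t50000.0\t25\nb202\t62000.0\t41\n")

con = sqlite3.connect(":memory:")   # or "temp.db" for a file-backed database
cur = con.cursor()
cur.execute("create table mytable (idnum varchar(20), salary float, age int)")

# The shell's .separator + .import pair becomes a csv read + executemany
with open("persondata.txt") as f:
    cur.executemany("insert into mytable values (?, ?, ?)",
                    csv.reader(f, delimiter="\t"))

cur.execute("create index idex on mytable(idnum)")
young = cur.execute("select idnum from mytable where age < 30").fetchall()
print(young)  # → [('a101',)]
con.close()
```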

How I use sqlite3:
I know SQL “by heart”, so it is pretty easy for me to do things quickly with files, especially when I have to correlate values in files. Sometimes I reformat files in bash, perl, or more recently, Python.

Note that “sets” in Python (a built-in type since version 2.4) give really good database-like behavior. And sets are fast, in my experience.
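As a concrete illustration of what I mean by database-like behavior (the IDs here are hypothetical), sets give you fast membership tests and join-style set algebra:

```python
# Hypothetical example of "database-like" use of Python sets: membership
# tests and join-style set algebra between two ID lists (e.g. IDs parsed
# out of two different files).
ids_chip = {"gene1", "gene2", "gene5"}
ids_expr = {"gene2", "gene3", "gene5"}

both = ids_chip & ids_expr        # like an inner join on the ID column
chip_only = ids_chip - ids_expr   # IDs found only in the first file

print(sorted(both))               # → ['gene2', 'gene5']
print("gene1" in ids_chip)        # → True (fast, constant-time lookup)
```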

2008 post: NCBI GEO submission: howto hints

Ok, NCBI GEO submission of data can be a pain. I mean a big pain.
But there are a few simple things that can make it less painful.

here are my hints and a few steps:

1. Don’t assume that you will get the submission right the first time; it’s easy to have errors.
2. DO assume that NCBI will contact you requesting more information on some things. Be ready.
3. DO save all relevant files; as #2 says, you may get contacted.

And importantly:
4. Remember: some of the annoyance of the system is to ensure that in 5 years… or 10 years, your data will still be comprehensible. As opposed to having it in some weird vendor-specific format… So be patient.
5. Put that you did NCBI GEO submission on your resume. It can’t hurt.

Key “making it easier” hints
1. Do the submission while the people who generated the data are around. You will be surprised by the little things you need to add that are unclear.
2. You will need all the files for the experiments – you have to put raw files in as a supplement. So get the files together as much as possible.

The Steps: A Protocol
1. Search GEO for an entry that has the exact same type of data/type of array that you are submitting. This will save you huge amounts of time. You don’t want to have to redefine a platform file – it is annoying and will just cost you time and energy. And make the system worse.
2. After finding that file, you will have the platform file (the GPL file number) for the array type that you are using. Make a clear note of this!
3. (Note: there may be better ways to do this, but this works for me) Download the full SOFT-format version of the sample file that you found. The SOFT format makes uploading files way faster and easier.
4. The SOFT format is a text format and the opening lines are clear fields. Open the file in a text editor (note: for windows, download and install Notepad++ to do this; it will save you a lot of pain).
5. Cut away the header (maybe 30 or 50 lines) and make a new file. Edit this file with the parameters of your experiment.
6. The hard part is this: you have to make a data file that corresponds to the platform file IDs. This is beyond the scope of this blog post; maybe I will add something about this later.
7. Make a zip file of all the supplementary files (these are the raw data files). I’ll call this SUPP.zip
8. Edit the header to reflect that you are putting in a supplementary file and add the name of this file.
9. Add your header to the datafile (made in step #6). At the end of the datafile, you need an end line. Add this. Save this file. (Again, in windows, Notepad++ is the way to go for this.) I’ll call this file FORGEO.txt
10. Create a second zip archive (I’ll call it TOTAL.zip) containing:
a. FORGEO.txt
b. SUPP.zip
c. Note: this means that TOTAL.zip has exactly two files in it (FORGEO.txt and SUPP.zip).
11. Using the validation option, upload ONLY FORGEO.txt to see if it validates. This is important! It will save you a lot of time to do this. You will get an error about a missing supplementary file, but don’t worry about that.
12. Using direct submission, submit TOTAL.zip using the SOFT option. This will take a long time to load, generally. You will get a screen asking if FORGEO.txt or SUPP.zip is the datafile. Choose FORGEO.txt.
13. You are done with one submission!
14. I suggest that you actually use more informative names than FORGEO.txt and SUPP.zip and TOTAL.zip. I actually name the files with the array number. Like 85012.txt, 85012_supp.zip and 85012_total.zip.
15. IMPORTANT: if you have a lot of files or just big files, the FTP option is best.
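For what it’s worth, the packaging in steps 7 and 10 is easy to script; here is a minimal sketch using Python’s zipfile module. The raw file names are hypothetical placeholders, and the sketch writes stand-in files first so that it runs on its own.

```python
import zipfile

# Hypothetical placeholder raw data files; in a real submission, use the
# experiment's actual raw files.
raw_files = ["array1_raw.txt", "array2_raw.txt"]
for name in raw_files + ["FORGEO.txt"]:
    with open(name, "w") as fh:
        fh.write("placeholder\n")   # stand-ins so the sketch runs

# Step 7: all supplementary (raw) files go into SUPP.zip
with zipfile.ZipFile("SUPP.zip", "w", zipfile.ZIP_DEFLATED) as z:
    for f in raw_files:
        z.write(f)

# Step 10: TOTAL.zip holds exactly two files: FORGEO.txt and SUPP.zip
with zipfile.ZipFile("TOTAL.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.write("FORGEO.txt")
    z.write("SUPP.zip")

print(zipfile.ZipFile("TOTAL.zip").namelist())  # → ['FORGEO.txt', 'SUPP.zip']
```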

2009 post: TAMALPAIS: howto open files

key words: TAMALPAIS, NimbleGen, Mark Bieda, ChIP, server

TAMALPAIS is the webserver that I created to analyze NimbleGen ChIP-chip data (note that it is not for promoter data). You can find it at:


I’ve received queries from a number of people about opening files from my TAMALPAIS server. Some people have trouble opening them, so here are instructions:

1. on the mac (modern macs with OSX, not ancient macs), this should be easy – just click on the file

On windows (one option: transfer the files to a Mac, as above; if you don’t want to do this (I wouldn’t), then continue):
1. download the FREE 7ZIP program from www.7-zip.org
2. install 7ZIP
3. right-click on the file from TAMALPAIS, select 7ZIP from the menu, then select “Open archive”
4. click on the files that show up in the archive window. At any point, you can click on the “extract” button in the toolbar in the window (it is the large “minus sign” that is blue/purple).
5. for any of the files ending with .tar.gz, .tar, or .zip, you can repeat this procedure (starting with step #3).

There are a bunch of files in subarchives (that is, in other .tar.gz files within the archive).
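If you have many of these nested archives, the click-and-extract procedure above can also be scripted. Here is a rough sketch using Python’s tarfile and zipfile modules that keeps extracting until no unopened .tar.gz/.tar/.zip files remain; the “.extracted” directory-naming scheme is just an invention for this sketch.

```python
import os
import tarfile
import zipfile

# Rough sketch of the repeated "open archive" procedure: walk a directory
# and extract every .tar.gz/.tar/.zip found, repeating until no unopened
# archives remain. Each archive X is extracted into a directory named
# X.extracted (a naming scheme invented for this sketch).
def extract_all(path="."):
    done = False
    while not done:
        done = True
        for root, _dirs, files in os.walk(path):
            for name in files:
                full = os.path.join(root, name)
                out = full + ".extracted"
                if os.path.exists(out):
                    continue  # already opened this archive
                if name.endswith((".tar.gz", ".tar")):
                    with tarfile.open(full) as t:
                        t.extractall(out)
                    done = False
                elif name.endswith(".zip"):
                    with zipfile.ZipFile(full) as z:
                        z.extractall(out)
                    done = False
```

For example, calling extract_all on a download directory containing a .zip that itself holds a .tar will leave both fully unpacked.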


If you have problems, contact me using the contact information on the About page of this blog.