Tips for using Blobs with InterBase


This article is 99% a copy and paste from a newsgroup discussion thread. Messages there don't live forever (it looks like about 13 months at most), so I am posting it here so this elaborate post may live longer. The post was written by Jeff Overcash, a long-time TeamB member and author of the InterBase Express (IBX) components included with RAD Studio.

What you see below is partially out of context, as it comes from a discussion. However, I am hopeful the overall content will be helpful to some in years to come.

Blobs in databases should be used cautiously. They have two main drawbacks.

1) They increase the file size, which impacts backup and restore times. Note that I 
personally used to DBA an enterprise system that was 125 GB. I talked to 
someone at a BorCon years ago who told me he had a 1.2 terabyte DB. So size 
itself is not something IB has problems with, but the larger the file, the 
longer the backup and restore times will be.

Now with your numbers and file size, I suspect it took at most 2 minutes longer 
to perform your backup/restore sequence with the blob data than without. The rest 
of your data is small, and large-database B/R sequences tend to be slow due to 
rebuilding the indexes. Your "real" data is relatively small, so that process 
should be lightning fast. So here you are mainly seeing how fast the hard disk 
can copy 1 GB of data, which should take only a few seconds.
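For reference, an InterBase backup/restore cycle is done with the `gbak` tool; the database and backup file names and the credentials below are placeholders, not values from the discussion:

```shell
# Back up the database to a portable backup file
gbak -b -user SYSDBA -password masterkey mydb.ib mydb.ibk

# Restore it into a fresh database file; this rebuilds every index,
# which is where most of the time goes on a large database
gbak -c -user SYSDBA -password masterkey mydb.ibk mydb_restored.ib
```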

2) This is the more complicated problem. It has to do with the memory buffer 
inside the DB. Your whole database should actually fit in RAM even on the 32-bit 
version of InterBase, so right now this should have little impact on you.

The problem arises with larger databases, where only part of the 
database resides in the cache at any time. What blobs do, particularly 
large blobs, is prevent the cache from ever reaching a steady state.

The #1 thing you can do to slow down your DB is to read/write from the disk. 
Whenever a large blob is read in, other data has to be flushed from the cache 
back down to the disk. Constant reading of blobs results in the buffer always 
being highly volatile and never getting into a comfortable state.

In that case Quinn's suggestion is the best solution (don't store the blob in the database, just a link to a filename). Highly accessed large blobs should be stored on a file server, with the database holding only the path/filename of the blob. Then the buffer stops thrashing the disk and everything gets "calm" from an internal standpoint.
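The "store a path, not the blob" pattern can be sketched as follows. This is a minimal illustration, not code from the discussion: SQLite stands in for InterBase purely so the sketch is self-contained, and the table, column, and function names are all hypothetical. With InterBase you would issue equivalent SQL through IBX or another driver.

```python
import os
import sqlite3
import tempfile

# Stand-in for the file server share that holds the actual blob payloads.
storage_dir = tempfile.mkdtemp()

# SQLite is used here only so the example runs anywhere; the schema idea
# is the same for InterBase: store a path, not the binary data itself.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, file_path TEXT)"
)

def store_document(title: str, data: bytes) -> str:
    """Write the payload to the file store and record only its path."""
    path = os.path.join(storage_dir, f"{title}.bin")
    with open(path, "wb") as f:
        f.write(data)
    conn.execute(
        "INSERT INTO documents (title, file_path) VALUES (?, ?)", (title, path)
    )
    conn.commit()
    return path

def load_document(title: str) -> bytes:
    """Look up the path in the database, then read the payload from disk."""
    (path,) = conn.execute(
        "SELECT file_path FROM documents WHERE title = ?", (title,)
    ).fetchone()
    with open(path, "rb") as f:
        return f.read()

store_document("report", b"large binary payload")
print(load_document("report"))  # the bytes round-trip through the file store
```

The database rows stay tiny, so the page cache holds only hot relational data, and the large payloads never pass through the DB buffer at all.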

That does have other impacts, like portability from system to system: if you have 
to replace the server, you have to make sure those file paths are still available, etc. 
But it actually makes things like nightly server backups faster.

Unless your app is being deployed to a machine with very limited RAM (like 
Android or an iOS machine using IB ToGo) you probably do not require this added 
layer of complication.

Support guy since 3/18/85. It was dBASE II/III back then.

