How to lock a record in InterBase®/Firebird

Or after-dinner reasoning about optimistic and pessimistic locking.

By Alexander V. Nevsky, Alex Cherednichenko, translated by Marina Novikova.
This article first appeared on InterBase World.

Translations: Como travar um registro em InterBase/Firebird - Portuguese (.pdf) and Un razonamiento después de cenar acerca de bloqueos optimistas y pesimistas - Spanish (.pdf).

Question: Can one swallow a billiard-ball?
Answer: Yes, but what for?

When newbies start developing applications with InterBase®/Firebird, the question they often ask, especially if their background is with locking SQL servers, is how to prevent users from editing some data and overwriting each other's changes in a multi-user environment. They are looking for ways to lock records, but InterBase® and Firebird do not support explicit record locking. More precisely, Firebird lacked it before Firebird 1.5, and we will speak about this at the end of the article.

It is not normal to place explicit locks on records in a multi-generational architecture (MGA) SQL server. It is possible and, sometimes, it can even be useful. In most cases, however, the right choice of transaction isolation level and user application workflow will avoid both overloading the server and locking up access for other users.

First, it is bad practice to design applications where all data changes occur in the context of a single, long-running transaction over large volumes of records. Unfortunately, the component suite InterBase Express, shipped with Delphi and BCB, encourages such practice, especially in the Employee example, where a single TIBTransaction component operates perpetually in CommitRetaining mode. In our opinion, it makes more sense to use FIBPlus components, developed from the same original codebase (FreeIBComponents, by Gregory Deatz).

The FIBPlus approach is to allow executing a dataset’s UpdateSQL in a separate transaction from the one in which the SelectSQL and RefreshSQL operations are done. A record set is selected into a DBGrid-style control by a prolonged read-only transaction (which does not hamper OIT progression and garbage collection), whilst the data are modified by another short one, possibly with a different isolation level. For IBX there is a patch by Fanis Ghalimarzanov enabling the same functionality.

We believe this is the right approach, and we start by discussing its implementation. The standard reasons for using locking in database-sharing applications include:

1. Multiple users performing DML operations on the same attribute, such as "an amount of something" (goods, money, etc.). For example, warehouse staff recording incoming goods. Using Read Committed transactions with Record Version is sufficient for this, provided the client application avoids this updating algorithm:

  • Read the remainder value
  • Change it
  • Write a new value

The correct approach is to record increments, as in the statement

 update Goods_In_Stock set Amount = Amount + :Increment 
 where Goods = :Goods

The algorithm for the NoWait transaction mode (using a Pascal-like pseudocode):

 Repeat
   StartTransaction;
   Try
     Update;
     Answer := idOK;
   Except
     Answer := Application.MessageBox('Repeat?', ...);
   End;
   Commit;
 Until (Answer = idOK) Or (Answer = idNo);
 { OK, or the user will do this later }

or simply use the Wait transaction mode. In that case, though, we lose the option of abandoning the update and doing something else instead of waiting.

Conflict rarely occurs. Even if multiple users are working with the same goods items, a conflict can only happen if another user posted changes during the interval between StartTransaction and Update and has not yet committed them. Even if this happens, we simply repeat the attempt (NoWait transaction mode) or wait (Wait).

The authors try to avoid using the Wait mode, as it requires very careful attention to data-access design wherever multiple update statements are executed in one transaction. In complex systems the possibility of deadlock is higher (InterBase®/Firebird error messages refer to any lock conflict as a deadlock, even where the conflict is not a real deadlock).

We will speak about this later, discussing new features of Firebird 1.5.

2. Users are competing for some resource. For example, a sales person might reserve goods for particular customers before stock runs out: the stock issue is recorded before the goods physically leave the warehouse. The solution is similar to the previous one, except that a Before Update trigger is added. It checks that the remaining stock does not go negative, and the client-side logic is adjusted to handle the resulting exception: if you are out of goods, there is no reason to continue the operation.
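
A minimal sketch of such a trigger, assuming the Goods_In_Stock table from case 1 and a hypothetical exception name:

 /* hypothetical exception used to report the shortage to the client */
 CREATE EXCEPTION E_NOT_ENOUGH_GOODS 'Not enough goods in stock';

 SET TERM ^ ;
 CREATE TRIGGER BU_GOODS_IN_STOCK FOR GOODS_IN_STOCK
 ACTIVE BEFORE UPDATE
 AS
 BEGIN
   /* refuse any update that would drive the remainder below zero */
   IF (NEW.AMOUNT < 0) THEN
     EXCEPTION E_NOT_ENOUGH_GOODS;
 END ^
 SET TERM ; ^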

3. The required change cannot be done by incrementing or decrementing an amount, for example when the name of a goods item is changed, but the probability of conflict is low. Application logic will probably allow the user to re-type a new name without much difficulty. For this kind of write, the concurrency (snapshot) transaction isolation level fits best of all.

The user navigates the datasets, chooses a record to edit and then:

  • the snapshot starts;
  • the record is refetched (the data in the set may be out-of-date);
  • it is edited in the visual control and an update is attempted;

In this case, in NoWait mode, a conflict will always occur when two users simultaneously attempt to post changes to the same record, regardless of whether either of them has committed. If a conflict occurs, one transaction has to be undone. If the write involves several SQL statements, you need to roll it back; if it involves just one statement, you can simply commit it, since no changes were actually made in the database. That is why, in this case, we recommend committing: a rollback increases the gap between the OIT (oldest interesting transaction) and the OAT (oldest active transaction). Then the loop repeats.
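
Put in isql-like terms, the whole cycle might look roughly like this (a sketch only; the Goods table, its columns and the literal key value are assumptions):

 SET TRANSACTION NO WAIT ISOLATION LEVEL SNAPSHOT;

 /* refetch the chosen record - the data in the grid may be out of date */
 SELECT Name FROM Goods WHERE ID = 1;

 /* the user edits the value in the visual control, then we try to post */
 UPDATE Goods SET Name = 'New name' WHERE ID = 1;

 /* on success - commit; on a lock conflict - also commit (the single
    statement changed nothing in the database) and repeat the cycle */
 COMMIT;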

In such cases, with IBX and FIBPlus, the contents of db-aware controls are lost after the commit/rollback, so it is desirable to implement some kind of input caching in the client application to preserve the user's changes in the event of a conflict. An easy way is to use non-data-aware controls for the user input and then copy the values into the db-aware ones. In Wait mode the server will wait for the outcome of the conflicting transaction: if it rolls back, our changes will be applied; if it commits, we will get a lock conflict exception and handle it in the same way as in NoWait mode.

The following cases are examples of when it really is necessary to swallow the billiard-ball (see the epigraph). All the approaches described below share one disadvantage: an irresponsible user can block his or her colleagues' work for a long time. That is why your first resort should be to try to formulate the task so that it can be solved with the techniques described above, with some additional measures to control user behaviour if required.

4. A lengthy edit of a text document is to be done and we do not want to end up having to roll the work back and start again.

The modifying transaction starts and the first statement tries to do

 update MyTable
 set <some scalar non-indexed attribute> = <some scalar non-indexed attribute>
 where Primary_Key_Columns = <definition of record>

Programmers call this technique a dummy update. If it causes a conflict, the record already has an update pending somewhere else. The application gets an exception back from the API and should handle it by rolling back and not allowing the user to edit the record; leaving it unresolved would make the record inaccessible to other users' changes (transactions) until the end of our transaction. If the dummy update succeeds, the record can be reread and edited with certainty that its changes will be saved.
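
For example, for a hypothetical Documents table (names and key value are assumptions only), the sequence might be:

 SET TRANSACTION NO WAIT ISOLATION LEVEL SNAPSHOT;

 /* the dummy update: either creates our pending record version
    or raises a lock conflict if someone else already holds the record */
 UPDATE Documents SET Editor = Editor WHERE Doc_ID = 1;

 /* no exception - the record is ours: reread it and let the user edit */
 SELECT Body FROM Documents WHERE Doc_ID = 1;

 UPDATE Documents SET Body = 'corrected text' WHERE Doc_ID = 1;
 COMMIT;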

If some application disregards this pessimistic locking discipline, the effects will be:

  • If the isolation level of its transaction is concurrency, a conflict will inevitably occur when it attempts to post its current changes.
  • If the isolation level is read_committed rec_version, a conflict will occur if it attempts to post changes before our commit. If it posts the changes after our commit, our changes will be overwritten.

When working with automatically constructed data update statements, for example in the BDE TTable and TQuery components, you should remember that, if no changes were made during a session in Edit mode, the call to Post will do nothing and the BDE will not send anything to the server. As a result there will be no dummy update.

5. There is an "object" with data in several tables. Operations on the attributes of these tables can be performed by users from different departments, and the results of their work can influence each other's actions. For example, for a multi-stage transport consignment of goods, the following tables are connected to the header table:

  • Route
  • Purchase orders and supplier invoices
  • Directives to unload/load for each point of the route
  • Projected and actual stock of transport units for each segment of the route
  • Selling documents

Any table of the "object" can be edited during its life-cycle; usually this is done step by step. For example, the traffic superintendent changes the route and works with the plan for loading/unloading stock. If at the same time the warehouse gets the outdated loading instructions and begins loading, there is a problem. And vice versa: when the goods are already loaded and all the necessary documents are being generated, the traffic superintendent may suddenly decide to send other goods or change the destination.

In this case it is easy to apply method 4, where the logic of the client application, probably together with sets of stored procedures and triggers (if there are any), is adapted to require a lock attempt on the header before any attempt to work with the other tables. Note that this approach does not protect the data from mistakes in the developer's data-access design, nor from edits performed on a record in a dependent table by some third-party database admin tool. Only the record in the header table is actually locked, so if someone changes the stock while leaving the header unlocked, the logical structure of the data (in respect of synchronisation of the values of related attributes) may be broken, or unpredictable data integrity conflicts may arise in dependent tables not touched directly by the application.

This problem can be partially solved in InterBase®/Firebird using referential integrity (foreign key) constraints (see case 6).

6. A complicated variant of 5: you need to run and commit transactions while working on the "object", without releasing the header record. The idea of using a separate lock transaction that locks the record in the header table suggests itself. But, as usual, there is a snag: when the working transactions try to change records in tables that refer to the locked header record by foreign key, they will get a conflict. The workaround is to create one more table, in a 1:1 relationship with the header table, and lock the records of this table that correspond to the header ones, as sketched below.
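
A sketch of such a scheme, with purely hypothetical table and column names:

 /* a shadow table in a 1:1 relationship with the header table */
 CREATE TABLE Consignment_Lock (
   Consignment_ID INTEGER NOT NULL PRIMARY KEY,
   Dummy          SMALLINT,
   CONSTRAINT FK_Consignment_Lock
     FOREIGN KEY (Consignment_ID) REFERENCES Consignment (ID)
 );

 /* the long "lock" transaction holds the shadow record, not the header,
    so working transactions that touch detail tables referencing the
    header by foreign key do not run into the lock */
 UPDATE Consignment_Lock SET Dummy = Dummy WHERE Consignment_ID = 1;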

Now it is time to speak about the new locking abilities introduced in Firebird 1.5. A dummy update has one disadvantage: it causes the triggers of the modified table to fire, so conditional logic has to be implemented there to separate a dummy update from the trigger's real work, such as log writing.

Firebird 1.5 introduces a new piece of functionality: the WITH LOCK option for SELECT statements. At first (in Release Candidate 3) this option was allowed only in SELECT FOR UPDATE statements; later it was allowed in any SELECT. We will consider its usage in this historical sequence, so:

 Select ... From Table [Where ...] [Order By ...] For Update With Lock.

The SELECT FOR UPDATE syntax has been present in InterBase® for a long time, but it has nothing to do with locking. During execution of a normal SELECT, records are sent to the client in packets: although records are fetched by the application one by one, the client library (gds32, fbclient, libgds, etc.) receives from the server a packet of records of the requested size and buffers it. During execution of SELECT FOR UPDATE the packets are formed of exactly one record each, and the next packet is formed and fetched by the client only after the application requests it. SELECT FOR UPDATE WITH LOCK combines the functionality of SELECT FOR UPDATE with a dummy update: a new version of the record is created at the moment of fetch. It is the same thing that happens when an update is posted, except that the triggers do not fire. So this statement can be used in all the above-mentioned cases instead of a dummy update, and you no longer need to worry about trigger logic. When using the WITH LOCK option, remember that locks are released at the end of the transaction, not when the query is closed.
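
A sketch of its use in place of a dummy update (the Goods table and key value are, again, assumptions):

 SET TRANSACTION NO WAIT ISOLATION LEVEL SNAPSHOT;

 /* fetching the row creates its new record version; triggers do not fire */
 SELECT ID, Name FROM Goods WHERE ID = 1
   FOR UPDATE WITH LOCK;

 /* the record stays locked until commit or rollback, not until the
    query is closed */
 UPDATE Goods SET Name = 'New name' WHERE ID = 1;
 COMMIT;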

Judging from experience gained in the newsgroups, many newbies use features for purposes other than those intended by the server developers. Returning to the subject of this article: to spare themselves the effort of understanding how transactions work at different isolation levels, they lock everything, and lose the benefit of the fast throughput that optimistic row-level locking brings to multi-user systems. The syntax of SELECT FOR UPDATE WITH LOCK imposes locks on the entire scope of your cursor, one row at a time, until the entire cursor is fetched and locked. It is NOT row-level locking. That is why we warn you against using this feature as a workaround to avoid learning how to handle concurrency locking conflicts.

Say, for example, we need to lock 100 records. With Oracle, the same lock syntax creates a cursor at the server; depending on the isolation level and the transaction mode, the server then resolves conflicts for the cursor as a whole, so on the client we have either a pre-emptively locked cursor or an exception. Firebird's architecture implies no such feature, so we get records one by one and they are locked at the moment of fetch to the client. So if we need to do something with 100 records and not all of them can be handled, it is quite possible to process 99 of them and then get a conflict on the last one.

It is possible to fetch all of the records at once, loading the network much more heavily than a simple SELECT would (remember that each record is formed into a separate packet with a header and a trailer). You could use the read_committed isolation level in wait mode, but it is no good to use the feature in such a way. In theory the read_committed wait mode guarantees 100% that you will get the necessary results, but this locking mode requires more attention during access design. Let's suppose that you lock 100 records with ORDER BY ID, and your colleague does the same with ORDER BY ID DESC. A deadlock occurs, because one process has secured resource A and is waiting for the release of resource B, while the other has secured resource B and is waiting for the release of A. In real life the deadlock may be far less obvious, as the sets of records to be locked are designed by different people (maybe even from different departments) and are based on data from different tables; there may be many more than two queries involving data from different tables. This is a road to hell. It is interesting that, if we digress from locking, execution of SELECT FOR UPDATE WITH LOCK in a read_committed rec_version transaction looks like a change of the isolation level of the SELECT statement: it is as if it were being executed in a transaction with read_committed no_rec_version isolation.
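
To make the ORDER BY example concrete, a sketch (table and range are assumptions) of two such transactions in wait mode:

 /* transaction A */
 SELECT * FROM Goods WHERE ID BETWEEN 1 AND 100
   ORDER BY ID FOR UPDATE WITH LOCK;

 /* transaction B, running at the same time */
 SELECT * FROM Goods WHERE ID BETWEEN 1 AND 100
   ORDER BY ID DESC FOR UPDATE WITH LOCK;

 /* when the two fetch sequences meet somewhere in the middle, A waits
    for a row already locked by B and vice versa - a genuine deadlock */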

When we were writing the first edition of this article, the current version of Firebird 1.5 was Release Candidate 3, which placed no restrictions on the structure of a SELECT using the lock option, and the syntax analyzer accepted the construction

 For Update [Of ...] With Lock

After the syntax analysis, the [Of ...] part was simply ignored. For those who enjoy experimenting with Release Candidates, we will mention some peculiarities of SELECT FOR UPDATE WITH LOCK execution with more complex structures:

  • a query with aggregates and GROUP BY is executed but locks nothing;
  • a query with UNION is executed but locks nothing;
  • a query with JOIN always locks only the leading table. As the optimizer can change the query execution plan at any time, this feature makes locking unpredictable;
  • a query with DISTINCT locks, for each distinct value, only the last of the records merged by DISTINCT during the external sort; that is, in the table
    TESTDIST
    ID          ATTR
    =========== ===========
    1           3
    2           2
    3           2
    4           3
two records will be locked after execution of Select Distinct Attr From TestDist For Update With Lock. In the general case these will be arbitrary records, because the table is read in natural order;
  • a query on a read-only View executes but locks nothing;
  • probably there is no reason to mention a query on a stored procedure but we will: the query executes but locks nothing either.

After a number of experiments and discussions, the Firebird developers decided that, in the release version, the WITH LOCK option would be allowed only in a simple SELECT from a single table, to avoid unpredictable locking and the confusion around statements which execute but lock nothing. Now attempts to execute such statements raise an exception.

Usage of the WITH LOCK option in an ordinary SELECT statement requires the same precautions as SELECT FOR UPDATE, complicated by a further consideration: if you select more than a single row, you cannot easily predict how many rows will be locked at the moment you open the query. Even if the query is not connected to a DBGrid-style control, which forces a fetch of as many rows as fit on the screen, and you intend to process the rows one by one, the server will fetch enough rows to fill a network packet, or even two: one to be sent to the client immediately, another to be ready for the client's next fetch request. This amount depends on the packet size used in your network, the size of a row in the table and the network protocol's compression algorithm. So, when using an ordinary SELECT ... WITH LOCK on a cursor, you lock neither a single row nor the whole cursor, but an unknown number of rows at query opening. We deliberately recommend using locking only with singleton SELECTs (see the sketch below), to avoid unpredictable application behaviour. If you still need cursor locking, fetch all the rows immediately after opening if you want to lock the whole set, or use SELECT FOR UPDATE if you want step-by-step processing. Do not use the WITH LOCK option to lock whole tables; for that, take a look at the consistency isolation level in the documentation.
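
So the recommended pattern is a singleton, along the lines of (names and key value assumed):

 /* exactly one row is locked at open time */
 SELECT ID, Name
   FROM Goods
  WHERE ID = 1
   WITH LOCK;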

While developing client applications, the authors use an IBX version derived from IBX 4.42 and maintained by themselves. After the first attempts to apply SELECT FOR UPDATE WITH LOCK, they discovered that IBX does not support SELECT FOR UPDATE even without WITH LOCK. Below is a patch for IBX that supports this Firebird 1.5 functionality. The limitation originates in the initial component set, Free IB Components by Gregory H. Deatz, which serves as the basis for both IBX and FIBPlus.

The main idea of this patch is to handle the SQLType SQLSelectForUpdate in the same way as SQLSelect. Owners of later IBX versions can try to apply this patch, or devise their own based on the same idea.

It is necessary to modify two modules: IBSQL.pas and IBCustomDataSet.pas. Three methods of the TIBSQL class should be changed in IBSQL.pas: TIBSQL.Close, TIBSQL.ExecQuery and TIBSQL.GetUniqueRelationName:

 //**********************************************
 procedure TIBSQL.Close;
 var
   isc_res: ISC_STATUS;
 begin
   try
     if (FHandle <> nil) and (SQLType = SQLSelect) and FOpen then begin
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Replace by:
    if (FHandle <> nil) and (SQLType in [SQLSelect, SQLSelectForUpdate])
       and FOpen then begin
 ... 

 //**********************************************
 procedure TIBSQL.ExecQuery;
 var
   fetch_res: ISC_STATUS;
 begin
   CheckClosed;
   if not Prepared then Prepare;
   CheckValidStatement;
   case FSQLType of
     SQLSelect: begin
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   Replace by:
     SQLSelect, SQLSelectForUpdate: begin
 ...

 //**********************************************
 function TIBSQL.GetUniqueRelationName: String;
 begin
   if FPrepared and (FSQLType = SQLSelect) then
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Replace by:
   if FPrepared and (FSQLType in [SQLSelect, SQLSelectForUpdate])  

In the module IBCustomDataSet.pas you should change the methods TIBCustomDataSet.InternalExecQuery and TIBCustomDataSet.InternalOpen:

 //**********************************************
 procedure TIBCustomDataSet.InternalExecQuery;
 var
   DidActivate: Boolean;
 begin
   DidActivate := False;
   try
     ActivateConnection;
     DidActivate := ActivateTransaction; 
     if FQSelect.SQL.Text = '' then
       IBError(ibxeEmptyQuery, [nil]);
     if not FInternalPrepared then
       InternalPrepare;
     if FQSelect.SQLType = SQLSelect then
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   Replace by:
     if FQSelect.SQLType in [SQLSelect, SQLSelectForUpdate] then
 ...

 //**********************************************
 procedure TIBCustomDataSet.InternalOpen;
 .......................
 begin
   ActivateConnection;
   ActivateTransaction;
   if FQSelect.SQL.Text = '' then
     IBError(ibxeEmptyQuery, [nil]);
   if not FInternalPrepared then
     InternalPrepare;
   if FQSelect.SQLType = SQLSelect then begin
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   Replace by:
   if FQSelect.SQLType in [SQLSelect, SQLSelectForUpdate] then begin
 ... 

FIBPlus introduced similar code changes to support SELECT FOR UPDATE in version 5.0; earlier versions do not support it. IBObjects is not widely popular in Russia, so we do not know whether it supports SELECT FOR UPDATE. We will be very grateful for any comments on this issue.

The authors are very thankful to Helen Borrie for her great proof-reading and comments, to Dmitry Yemanov, Dmitry Kuzmenko, Nikolay Samofatov and Serg Buzadzhy for their comments and additions, and to all the readers of the first versions for their typo and error reports.

This paper was written by Alexander V. Nevsky and Alex Cherednichenko, translated by Marina Novikova and is copyright Nevsky, Cherednichenko and IBPhoenix Inc.

See also:
Data Retrieval
Transaction options explained
Declarative referential integrity versus triggers
