The long-running SELECT statement blocked the INSERT/UPDATE statements

Sam Chung 0 Reputation points
2026-05-08T06:42:51.8466667+00:00

Scenario: 

  • We have multi-threaded batch insertion logic that inserts data into a MSSQL database.
  • While the insertion logic was running, some users executed a long-running SELECT statement; both scripts access the 'orders' table.
  • Only after the SELECT statement completed successfully (over 45 minutes) did the INSERT sessions proceed.
  • Result: the insertion logic took over 50 minutes to insert the data (almost 90% of that was waiting time).

Solution:

  • Set the transaction isolation level to READ UNCOMMITTED to eliminate the ~90% waiting time
  • script: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

We would like to know the impact of this action:

  • After executing "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;": execution time, required permissions, CPU/memory usage, and SELECT statement results
  • How the rollback process behaves
SQL Server Database Engine
2 answers

Sort by: Most helpful
  1. Erland Sommarskog 134K Reputation points MVP Volunteer Moderator
    2026-05-08T12:25:13.02+00:00

    DO NOT DO THIS! ABSOLUTELY NOT!

    Most likely this will result in the SELECT statements occasionally returning incorrect results, which users may or may not notice. If they notice, you will get bug reports you cannot reproduce. If they do not notice, they may make incorrect business decisions based on the incorrect data.

    When you use READ UNCOMMITTED, several bad things can happen:

    1. You read uncommitted and inconsistent data.
    2. You fail to read existing data, because it was moved while tables were being scanned.
    3. You may read the same data twice, again due to data movement.
    4. The query may fail with error 601.

    The correct solution to this type of blocking problem is to apply some form of snapshot solution. There are two ways to go.

    1. ALTER DATABASE db SET READ_COMMITTED_SNAPSHOT ON. With this setting, all reads are from a snapshot which reflects how the database looked when the query started. This setting is on by default in Azure SQL Database and is generally recommended. However, you will be reading stale data, and in some situations this can cause incorrect results, for instance with validation. This can be overridden with the READCOMMITTEDLOCK hint, but you need to change the code.
    2. ALTER DATABASE db SET ALLOW_SNAPSHOT_ISOLATION ON. With this setting, code that wants to use the snapshot needs to opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT. Queries then see the database as it was when the transaction started.
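
    The opt-in pattern for the second option can be sketched as follows (the database name YourDatabase and the dbo.orders columns are placeholders for your own objects):

    ```sql
    -- One-time setup (run once per database):
    ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- In the long-running reporting session, opt in to snapshot isolation:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    BEGIN TRANSACTION;
        -- Reads a transactionally consistent view of the data as of the
        -- moment the transaction started, without blocking writers.
        SELECT order_id, amount
        FROM dbo.orders;
    COMMIT;
    ```

    Sessions that do not issue the SET statement are unaffected, which is why this option is the lower-risk way to introduce versioned reads into an existing application.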

    Both these settings activate the version store, which lives in tempdb (unless you have activated accelerated database recovery), which means that tempdb will need more space and the overall load on tempdb will increase. It also adds overhead to UPDATE and DELETE operations (not so much to INSERT). This overhead is usually acceptable.
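
    If you want to keep an eye on that tempdb overhead after enabling either setting, the version store footprint can be checked with a DMV (available in recent SQL Server versions):

    ```sql
    -- Approximate version store size per database.
    SELECT DB_NAME(database_id)       AS database_name,
           reserved_space_kb / 1024.0 AS version_store_mb
    FROM sys.dm_tran_version_store_space_usage;
    ```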

    READ UNCOMMITTED a.k.a. NOLOCK can be useful for diagnostic queries, but it rarely has any place in application code.


  2. Marcin Policht 89,325 Reputation points MVP Volunteer Moderator
    2026-05-08T11:10:26.4866667+00:00

    AFAIK, under the default READ COMMITTED isolation level, a long-running SELECT keeps shared locks while reading data. Your INSERT and UPDATE statements require exclusive locks, so they wait until the SELECT finishes.
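
    While the blocking is happening, you can confirm this diagnosis from another session; a minimal sketch using the standard DMVs:

    ```sql
    -- Which sessions are blocked, by whom, and on what wait type:
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time AS wait_time_ms,
           t.text     AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;
    ```

    In your scenario you would expect the INSERT sessions to show LCK_M_X or LCK_M_IX waits with blocking_session_id pointing at the long-running SELECT.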

    When you execute:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    

    the session reads data without acquiring shared locks, so it neither waits on writers' exclusive locks nor blocks writers. This reduces blocking and allows concurrent inserts and updates to continue.

    The impact is mainly on data consistency. READ UNCOMMITTED allows dirty reads. A query may read uncommitted data that is later rolled back. The query can also return inconsistent or incorrect results during concurrent modifications.

    For example:

    BEGIN TRAN;
    
    UPDATE orders
    SET amount = 1000
    WHERE order_id = 1;
    
    -- Another session using READ UNCOMMITTED reads amount = 1000
    
    ROLLBACK;
    

    The second session reads a value that never became permanent.

    Execution time usually improves because sessions spend less time waiting on locks. Your inserts can continue while the long-running select is executing.

    CPU and memory usage are normally not significantly increased by READ UNCOMMITTED. The isolation level itself is lightweight.

    Rollback behavior does not change. Transactions still roll back normally. The only difference is that another session may already have read the uncommitted data before the rollback occurs.

    No special execution rights should typically be required beyond normal query permissions.

    A safer alternative would be enabling row versioning with:

    ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON;
    

    This should prevent reader/writer blocking without allowing dirty reads. Readers access versioned rows from tempdb instead of waiting for locks.

    You should also investigate why the SELECT runs for 45 minutes. The root cause might be missing indexes, table scans, outdated statistics, inefficient joins, or poor execution plans.
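
    As a starting point for that investigation, SQL Server records candidate missing indexes that the optimizer wanted for past queries. These suggestions are hints to investigate, not indexes to create blindly, and the DMV contents reset on instance restart:

    ```sql
    -- Candidate missing indexes ranked by estimated benefit:
    SELECT TOP (10)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
         ON s.group_handle = g.index_group_handle
    ORDER BY s.avg_user_impact * s.user_seeks DESC;
    ```

    Reviewing the actual execution plan of the 45-minute SELECT will tell you whether it is scanning the 'orders' table and which of these suggestions, if any, is relevant.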


    If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.

    hth

    Marcin
