The standard session_set_save_handler() call is not sufficient to deal with database-backed sessions. The problem is that PHP's session implementation is only designed to work with locks that last the entire request, and database transactions are problematic at best as a substitute. We need atomic operations on session data.

I've talked about this problem before on YouTube:

But the problem is not just with PHP frameworks; PHP's session implementation (and the custom handler interface) is only designed to work with file locking.

To recap the video: simultaneous requests (like ajax requests) can clobber each other's session writes, because session data is only read at the beginning of a request and only committed to disk or database at the end of the request. Multiple simultaneous ajax requests to a page that increments a counter will only increase the counter once.
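For example, with a save handler that takes no locks, a hypothetical counter page like this loses updates, because overlapping requests read the same starting value and write the same result:


//counter.php - a hypothetical example; two overlapping requests both read count = 1 here
session_start();
$count = isset($_SESSION['count']) ? $_SESSION['count'] : 0;
$_SESSION['count'] = $count + 1; //both requests compute 2
//both write 2 back at request shutdown; one increment is lost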

This is traditionally how people implement database-backed sessions: they just make an object and connect it to session_set_save_handler().


$session = new MySessionDbObject();
session_set_save_handler(
    array($session, "open"),
    array($session, "close"),
    array($session, "read"),
    array($session, "write"),
    array($session, "destroy"),
    array($session, "gc")
);

The open and close handler callbacks are designed to obtain and release OS file locks.


// ext/session/mod_files.c
static void ps_files_open(ps_files *data, const char *key)
{
	//....
	do {
		ret = flock(data->fd, LOCK_EX);
	} while (ret == -1 && errno == EINTR);
	//....
}

But when most people write their open and close handlers, they simply connect to the DB, or merely return TRUE - something like the sketch below. The database equivalent of obtaining a lock is opening a transaction. Problem solved, right?
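A naive handler pair might look like this (the connection details here are illustrative; the point is that open() and close() take and release no locks):


//a naive open/close pair - connection details are made up
class MySessionDbObject {
	private $db;

	function open($savePath, $sessionName) {
		$this->db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
		return TRUE; //no lock is taken here
	}

	function close() {
		return TRUE; //nothing to release
	}

	//read(), write(), destroy(), gc() omitted
}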

The problem with transactions as database locks

There are two problems with using transactions to lock your database sessions. The first is that MySQL doesn't support nested transactions, so starting (and committing) any other transaction during the request implicitly commits the session transaction and releases the lock on the session table.
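To illustrate (using the same hypothetical sess table and db wrapper as the later examples):


//the session handler locks the row in a transaction at request start
$db->execute('begin');
$db->execute("select data from sess where id = '%s' FOR UPDATE", $id);

//...later, unrelated application code opens its own transaction
$db->execute('begin'); //MySQL implicitly commits the open transaction,
                       //releasing the session lock mid-request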

Secondly, even if you were using Postgres or another DB that supports savepoints or nested transactions... is keeping hundreds of transactions open concurrently really the right way to use your resources? I don't think so; it would be safe, but hardly performant.

The majority of your requests will probably not alter session data at all, so obtaining an exclusive lock is a waste of resources and needlessly serializes requests that could be served in parallel.

Reduce your exposure time with atomic operations

The real solution seems to be to request locks only right before doing a write operation - rather than at the beginning and end of an entire request.

But with PHP's session extension, all of your session changes are committed together at the end of the request, or with session_write_close() (which can only be used once per request).

The solution is to not use PHP's native session extension at all: never access $_SESSION in your code; always use a custom session object.

If you design a session object that supports atomic operations, you can obtain a short-lived database lock only when you change the session data, leaving the majority of read-only requests free of locking overhead. This confines your time spent in lock contention to the moments where it is actually necessary.


//compare the following code
$session = new MySessionDbObject();
$val = $session->get('key'); //suppose $val is 1
$session->set('key', $val + 1); //this will write 2

//and

$session = new MySessionDbObject();
$session->increment('key');

In the first example, two requests can operate on the value of 'key' simultaneously; both will clobber each other with the value of 2, when the result should be 3.

In the second example, we're pushing the concept of incrementing as close to the database as possible - resulting in an atomic operation.


class MySessionDbObject {
	function increment($key) {
		//open a transaction and block to obtain an exclusive row lock
		$this->db->execute('begin');
		$dbrow = $this->db->execute("select data from sess where id = '%s' FOR UPDATE", $this->id);

		//re-read the current state of the session data,
		//because other requests may have updated it since we started execution
		$this->sessionData = unserialize($dbrow);

		//atomic operation
		$this->sessionData[ $key ]++;

		//write data back to the db
		$this->db->execute("update sess set data = '%s' where id = '%s'",
		    serialize($this->sessionData), $this->id);

		//complete transaction / release lock
		$this->db->execute('commit');
	}

	function decrement($key) { ... }
	function appendToArray($key, $val) { ... }
	function removeFromArray($key, $val) { ... }
	function setValue($key, $val) { ... }
}

By converting our changes to session data into commands like increment, decrement, etc., we move the operation out of our code and push it closer to the database. This ensures the changes happen atomically against the most recent data, and prevents two or more competing requests from overwriting each other's writes.
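Hypothetical usage inside a request then looks like this (the constructor signature and the get() accessor are assumptions):


$session = new MySessionDbObject($db, $_COOKIE['sid']);
$session->increment('cart_items'); //short-lived lock, atomic
$name = $session->get('username'); //plain read, no lock taken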

Using memcache as a distributed lock

You can use something like memcached to implement a locking strategy for your database calls and avoid both atomic operations and PHP's session extension. But, you need to implement some timeout (ttl in memcached) to avoid deadlocks. See, when PHP request finishes or abruptly dies, the OS will clean up file locks associated with that process. When a database connection dies, the SQL server will rollback transactions opened by that connection, but no such failsafe exists for memcached. So, you need to set an appropriate ttl for your locks to cover your longest sequence of operations... the longest time you expect any one operation to wait in a queue for access before it gives up and throws an error.