One thing that people are working on is taking those
concepts of transactions and moving them more into the
programming language. You want to do transactions in
memory. This allows you to get rid of a lot of the locks
you see in ordinary concurrent programming.
I think there’s a nice technology transfer there where
we’re moving not only queries into the programming
languages, but also optimistic concurrency and transactions as well.
TC I think the key is essentially that notion of moving
the transaction into the programmer’s world, and that’s
really what has happened for us. We say to programmers:
start a transaction, go touch whatever objects you need to
touch, and commit it when you’re finished.
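The begin/touch/commit pattern TC describes can be sketched in a few lines. This is a minimal illustration, not the actual system being discussed: each object carries a version number, a transaction records the version of everything it reads and buffers its writes, and commit validates those versions atomically, retrying on conflict.

```python
import threading

# Minimal sketch of optimistic "start a transaction, touch objects, commit":
# each object carries a version; commit validates the versions read and
# retries on conflict. All names here are invented for the example.

class VersionedStore:
    def __init__(self):
        self._lock = threading.Lock()      # short critical section at commit only
        self._data = {}                    # key -> (version, value)

    def get(self, key):
        with self._lock:
            return self._data.get(key, (0, None))

    def commit(self, read_versions, writes):
        """Atomically validate reads and apply writes; False on conflict."""
        with self._lock:
            for key, seen_version in read_versions.items():
                current_version, _ = self._data.get(key, (0, None))
                if current_version != seen_version:
                    return False           # someone else committed first
            for key, value in writes.items():
                version, _ = self._data.get(key, (0, None))
                self._data[key] = (version + 1, value)
            return True

def run_transaction(store, body, retries=10):
    """Start a transaction, let `body` touch objects, commit when finished."""
    for _ in range(retries):
        read_versions, writes = {}, {}

        def read(key):
            version, value = store.get(key)
            read_versions[key] = version
            return writes.get(key, value)  # reads see the transaction's own writes

        def write(key, value):
            writes[key] = value

        body(read, write)
        if store.commit(read_versions, writes):
            return True
    return False                           # gave up after repeated conflicts
```

The programmer's view is exactly what TC describes: touch whatever objects you need inside `body`, and the commit either succeeds or the whole thing retries.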
What’s interesting for us is that we’re still working
with SQL Server in a mode where we’re not using optimistic concurrency control. One result is that although
we have made it so programmers are no longer concerned
about concurrency, we have created this situation where
we very frequently get deadlocks in the database. It has
been challenging for us, particularly in the context of
NHibernate, because we end up wanting to be very careful about the locking that’s happening in the database.
You really think that the future of this is more in the
optimistic concurrency control end of things?
JB Definitely. In fact, the trouble you see in the
NHibernate way of marking things is perhaps because the
actual mapping is not entirely abstracted. In the Entity
Framework, we know how the entities and the objects are
assembled together in a set-oriented way from the underlying tables. Therefore, when the time comes to push an
update down through mapping to the database, we know
precisely the order in which these updates need to be
applied to avoid deadlocks.
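The ordering idea JB alludes to is the classic deadlock-avoidance rule: if every writer acquires its locks in one global order, no two writers can wait on each other in a cycle. A small sketch (tables and rows invented for the example):

```python
import threading

# If every transaction locks its rows in one canonical order (here: sorted
# by (table, key)), lock-order deadlocks cannot occur, even when two writers
# name the same rows in opposite order.

row_locks = {
    ("orders", 1): threading.Lock(),
    ("orders", 2): threading.Lock(),
}

def apply_updates(updates):
    """updates: dict mapping (table, key) -> new value."""
    applied = {}
    ordered = sorted(updates)              # the crucial step: one global order
    for row in ordered:
        row_locks[row].acquire()
    try:
        for row in ordered:
            applied[row] = updates[row]    # stand-in for the real UPDATE
    finally:
        for row in ordered:
            row_locks[row].release()
    return applied

# Two writers touching the same rows in opposite textual order; because both
# sort before locking, neither can hold one lock while waiting on the other.
t1 = threading.Thread(target=apply_updates,
                      args=({("orders", 1): "a", ("orders", 2): "b"},))
t2 = threading.Thread(target=apply_updates,
                      args=({("orders", 2): "c", ("orders", 1): "d"},))
t1.start(); t2.start(); t1.join(); t2.join()
```

A mapping layer that knows the full set of tables a transaction will touch can impose this ordering for you, which is JB's point about doing it below the programmer's level of abstraction.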
TC I guess it’s really more of a question of whether
those kinds of features need to bubble up into the query
language. What we’re finding is that when we’re making
queries at the level of NHibernate, we really want to be
able to provide locking hints on those queries—to say
you’re going to need to do this as an update lock, not as a
shared lock, when you read this object in initially because
we know that we’re going to write it back later on. If you
don’t do it as an update lock, we know that we’re going
to end up getting a lot of deadlocks from multiple people
coming in at the same time.
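In SQL Server terms, the hint TC wants is the Transact-SQL UPDLOCK table hint. A sketch of the two reads, with invented table and column names:

```python
# Illustrative only: the update-lock hint TC describes, in Transact-SQL.
# Table and column names are made up for the example.

# A plain read takes a shared lock. If two transactions both read the row
# this way and then try to write it, each waits for the other's shared lock
# to clear -- the lock-conversion deadlock TC describes.
plain_read = "SELECT Balance FROM Accounts WHERE Id = @id"

# Reading WITH (UPDLOCK) takes an update lock immediately, so a second
# transaction blocks at the read instead of deadlocking at the write.
hinted_read = "SELECT Balance FROM Accounts WITH (UPDLOCK) WHERE Id = @id"
```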
The real question is, does the notion of concurrency
control, the locking modes, have to float up to the highest levels of the system to allow people the degree of
control needed to do these sorts of things?
JB I don’t think so. If you have a system that finds it
necessary to expose those physical concepts up to the
program, then you’ve defeated the purpose of the abstraction. If you think about programming at the level of the
database, say, in the form of a stored procedure using
PL/SQL (Procedural Language SQL) or Transact SQL, you
don’t need to expose those kinds of things. They are
available but their use is discouraged because the system
should be able to handle that abstraction on behalf of the
user. So my approach to this would be to resist the need
to expose those locking constructs to the programmer as
much as I can, and really work on cleaning up and fixing
the mapping abstraction.
TC Imagine you’ve got the same kind of system that
we’ve been talking about. It’s object- and transaction-based, with that notion of “serializability” and
stuff like that. You’ve got a workload coming in and there
are many independent threads of control. What happens is that those independent requests are all sharing a
common set of objects. What we found when we built
this system in the most straightforward way was lots of
aborted transactions because of conflicts between the
requests coming in.
Do programmers have to worry about this, or do LINQ
and the Entity Framework again have a way of helping us
deal with this kind of stuff?
JB This is another scenario that is somewhat related to the
need for tools, not only for mapping but also to be able to
handle these potential concurrency issues. If we were able
to build a workload-oriented tuning wizard that could
take the workload of the application and its concurrency
characteristics and predict where you’re going to have
lock contentions, where you’re going to have deadlocks,
that would be the way to solve that issue or to help mitigate it. Lacking those tools, the developer has to figure
out a way to work around those situations.
EM Generally, you can raise the level of abstraction. To
me that means taking away irrelevant details, but that
means there are still details that are relevant. For example, what you describe is a relevant detail—the underlying system gets in trouble because you get a deadlock. By
definition that has now become a relevant detail, so you
have to take care of it. There’s no magic. There’s no such
thing as a free lunch.
TC I guess we’ve all been involved in developing long
enough to know that.