[snip]
> If someone starts a checkout while another commit is in progress then
> it is very possible for the sandbox to be "corrupted" in the sense
> that it is inconsistent with itself. I.e. the module compiles fine
> before the commit, it compiles fine after the commit, but a sandbox
> created during the commit fails. By the way, I've seen this happen
> quite regularly.
>
> If I remember my database theory (which is suspect) then what is
> needed is for readers (of the elements being written) to be held off
> until the commit is finished - i.e. the transactions need to be
> serialized.

No, I don't think it does. When a commit "transaction" starts, the only
thing the server needs to do is keep in memory the revision number of each
file in the module in which the transaction occurs. Every checkout and
update during the transaction would retrieve the last allowed revision
(which would be the last revision before the transaction started). The
newly committed revision of each file should become available just after
the transaction finishes.

Maybe serialisation of transactions is necessary only for concurrent
commit transactions, but if we have support for very long transactions
even that could be solved. Controlling very long transactions is far from
easy, but it's doable. I would like to see people discuss this further,
just to see whether what I've just said really works.

> So... does the atomic commit also imply serialization? If not, has
> anyone considered work to do this?

Unfortunately I don't have time to work on this, but I wish I could :-(

> - Rick

Xandao.
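To make the idea above concrete, here is a minimal sketch of that scheme in
Python: readers always see the last *published* revision map, commits stage
new revisions invisibly and then publish them in one atomic step, and only
commits are serialised against each other. All names here are hypothetical
illustrations, not actual CVS internals.

```python
import threading

class Repository:
    """Sketch of snapshot-style reads with atomic commits.

    A checkout or update started mid-commit sees the pre-commit
    revisions; the new revisions become visible only when the
    commit finishes. Readers are never held off.
    """

    def __init__(self):
        self._commit_lock = threading.Lock()  # serialises commits only
        self._published = {}   # file -> latest visible revision number
        self._contents = {}    # (file, rev) -> file data

    def checkout(self):
        # Grab a reference to the current published map; it is never
        # mutated in place, only replaced wholesale, so this is a
        # consistent snapshot.
        snapshot = self._published
        return {f: self._contents[(f, r)] for f, r in snapshot.items()}

    def commit(self, changes):
        with self._commit_lock:  # concurrent commits are serialised
            new_map = dict(self._published)
            for f, data in changes.items():
                rev = new_map.get(f, 0) + 1
                self._contents[(f, rev)] = data  # staged, not yet visible
                new_map[f] = rev
            self._published = new_map  # single atomic publish

repo = Repository()
repo.commit({"a.c": "v1", "b.h": "v1"})
snap = repo.checkout()          # snapshot taken before the next commit
repo.commit({"a.c": "v2"})
print(snap["a.c"])              # still "v1": mid-commit readers unaffected
print(repo.checkout()["a.c"])   # "v2" once the commit has finished
```

The key design point is that the published map is replaced, never mutated,
so a reader can never observe half of a commit.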