A bit of background on what I am trying to do.
I have some C code that works on an embedded system using its own “OS/Kernel” and I’m trying to do something similar in my App so they can use a Mac instead of the embedded system (which is very hard to maintain since it’s very old and not supported anymore).
This code works with two threads, in a producer/consumer model. The cool thing about this Kernel is that it supports what it calls Queue Semaphores. A Queue Semaphore automatically performs the Thread Switching and Queue Management in a Thread-Safe Manner.
The pseudo code for the task I’m trying to reproduce on the Mac allows it to be coded really elegantly in my opinion.
(int) taskConsumer(void)
while (TRUE)
    // Get the Head Item of the Queue, Suspend if the Queue is Empty
    myData = GetQueueHeadAndWait(queue);
    // Process Data
    Code to Process the Data
    // Free Data
    free(myData);
// Never Reached
(int) taskProducer((data*) theData)
// Check “theData” against the Head Item of the Queue and either allocate a new block and add it to the Tail or overwrite the existing Data in place.
newBlock = CheckData(queue, theData);
if (newBlock == TRUE)
    // Allocate Buffer
    myData = alloc(sizeof(data));
    // Code to Copy from theData into myData
    // Add Data to the Queue; this causes the Consumer Task to “fire” if it is asleep
    AddQueueTail(queue, myData);
else
    myData = GetQueueHeadNoWait(queue);
    if (myData == NULL)
        // Fatal Error
    // Code to Copy from theData into myData, no need to add it to the queue, because it’s already there
The way this works is that:
The Queue Semaphore is created empty, then the two tasks are created. One of the two tasks gets control first; if it’s the taskConsumer, the call to GetQueueHeadAndWait causes the task to suspend (since the queue is empty). At some point taskProducer runs, which takes the data from the source and adds it to the queue. If there are tasks waiting on the queue, the oldest waiting task is given the data and made ready, so that when it next gets control, GetQueueHeadAndWait returns with the Data received. If there are no tasks waiting, the data is just added to the queue (or overwritten).
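There’s no direct Cocoa equivalent of a Queue Semaphore, but the blocking behaviour above can be approximated with a mutex plus condition variable from pthreads, which Mac supports natively. Here’s a minimal sketch of that idea — all the names (qsem, qsem_get_head_and_wait, etc.) are my own invention to mirror your Kernel calls, not any real Apple or Kernel API:

```c
#include <pthread.h>
#include <stdlib.h>

/* A singly-linked queue guarded by a mutex, with a condition
   variable so the consumer suspends while the queue is empty. */
typedef struct node { struct node *next; void *data; } node;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    node *head, *tail;
} qsem;

void qsem_init(qsem *q) {
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    q->head = q->tail = NULL;
}

/* Producer side: add to the tail and wake one waiter, if any. */
void qsem_add_tail(qsem *q, void *data) {
    node *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: GetQueueHeadAndWait analogue; blocks while empty. */
void *qsem_get_head_and_wait(qsem *q) {
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)               /* loop guards spurious wakeups */
        pthread_cond_wait(&q->not_empty, &q->lock);
    node *n = q->head;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    void *data = n->data;
    free(n);
    return data;
}
```

The “made ready when data arrives” behaviour falls out of pthread_cond_signal waking the oldest-style waiter; a GetQueueHeadNoWait variant would just return NULL instead of calling pthread_cond_wait. Head-overwriting (your CheckData path) would need to happen inside the same lock to stay thread-safe.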
There are various other functions in the Kernel that allow it to Lock the Queue, set Task Priorities and things like that. In this case, the two tasks have equal priority and the Queue Semaphore used is private to these two Tasks.
I think this solution is so elegant and I’m trying to see if it is reproducible on the Mac/Cocoa, without much luck so far. The beauty of it is that it is completely self-regulating and self-synchronizing. Also, this mechanism can be re-used for different scenarios with very little tweaking; for instance, if you use two Queue Semaphores you can do things like:
FreeBufferQueue: pre-loaded with N Buffers when the Queue is created.
ProcessQueue: created empty.

Producer:
// Get a Free Buffer from the Free Queue, Suspend if there are none (e.g. stop producing).
// Fill Buffer
// Add Buffer to Process Queue

Consumer:
// Get a Buffer to Process from the Process Queue, Suspend if there are none.
// Process Buffer
// Add Buffer to Free Queue (allow Producer to start producing again, if it ever stopped)
This automatically throttles the Producer/Consumer to N buffers!
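The two-queue throttling above boils down to a pair of counting semaphores: “free slots” starts at N, “filled” starts at 0. Since sem_init is unimplemented on macOS, here’s a sketch that builds the counting semaphore from a pthread mutex and condition variable — all names (csem, run_demo, NBUF, etc.) are illustrative, not an existing API:

```c
#include <pthread.h>

/* Minimal counting semaphore from a mutex + condition variable. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int count;
} csem;

void csem_init(csem *s, int count) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = count;
}

void csem_wait(csem *s) {             /* "get a buffer": suspend at zero */
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void csem_post(csem *s) {             /* "return a buffer": wake a waiter */
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

/* Throttled producer/consumer: free_slots starts at N, filled at 0,
   so the producer can never get more than NBUF items ahead. */
enum { NBUF = 4, ITEMS = 100 };
static csem free_slots, filled;
static int produced, consumed;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        csem_wait(&free_slots);       /* stop producing at N in flight */
        produced++;                   /* "fill buffer" */
        csem_post(&filled);           /* hand it to the consumer */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        csem_wait(&filled);           /* suspend until there is work */
        consumed++;                   /* "process buffer" */
        csem_post(&free_slots);       /* let the producer run again */
    }
    return NULL;
}

int run_demo(void) {
    csem_init(&free_slots, NBUF);
    csem_init(&filled, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed;                  /* should equal ITEMS */
}
```

In a real version the counters would be replaced by the two buffer queues, each protected by its own lock; the semaphores only carry the “how many are available” bookkeeping, which is exactly the self-throttling property you describe.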
Any comments or suggestions greatly appreciated.
All the Best
On Mar 7, 2018, at 03:16 , Dave <dave@...
it stays on the queue until the NSOperation method gets called, which checks the “Cancel” property
You’re right, I misremembered. GCD has a DispatchWorkItem “cancel” method, which doesn’t have any proper documentation but I think dequeues the item (since GCD doesn’t have an actual cancellation protocol).
However, you do have to be careful. There are a lot of thread-safety traps involved in removing things from the queue, and/or “overwriting” state in the top queue entry. The NSOperationQueue mechanism is at least safe in that it helps prevent you from doing unsafe things, at the cost of having to flush the cancelled entries from the queue. Since the cancelled items have to be dequeued anyway, I’m not sure the additional cost of letting them run to find out if they’re cancelled really matters (unless there are going to be thousands of them).
Anyway, if you use a solution that allows you to dequeue items, make sure you cover the case where the item has begun executing after you’ve decided to lock the queue but before you succeed in locking the queue.
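One way to cover that race is to guard both “start executing” and “cancel” with the same lock, so exactly one side wins. A tiny sketch of the idea — these names (work_item, item_begin, item_try_cancel) are mine, not NSOperation or GCD API:

```c
#include <pthread.h>
#include <stdbool.h>

/* One mutex arbitrates between the worker starting an item and the
   controller trying to cancel it; whichever takes the lock first wins. */
typedef struct {
    pthread_mutex_t lock;
    bool started;
    bool cancelled;
} work_item;

void item_init(work_item *it) {
    pthread_mutex_init(&it->lock, NULL);
    it->started = false;
    it->cancelled = false;
}

/* Worker thread calls this just before running the item. */
bool item_begin(work_item *it) {
    pthread_mutex_lock(&it->lock);
    bool run = !it->cancelled;
    if (run) it->started = true;
    pthread_mutex_unlock(&it->lock);
    return run;                     /* false: cancelled in time, skip it */
}

/* Controller calls this; true only if the item had NOT already begun. */
bool item_try_cancel(work_item *it) {
    pthread_mutex_lock(&it->lock);
    bool won = !it->started;
    if (won) it->cancelled = true;
    pthread_mutex_unlock(&it->lock);
    return won;                     /* false: too late, it is running */
}
```

If item_try_cancel returns false you know the item is (or will finish) executing, and you fall back to whatever “already running” handling you need, rather than assuming the dequeue succeeded.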