
Multi-threading in Java: Concurrency Patterns from the Early Days

Java gave every developer access to threads from day one. Here is how we used them at Motorola in 1997 — and which patterns held up.



Before Java, writing multi-threaded code meant POSIX threads in C — a world of pthread_create, pthread_mutex_lock, and memory faults that only appeared under load. Java 1.0 shipped with threads as a first-class language feature. Every object had wait(), notify() and synchronized. It felt revolutionary. It also introduced a generation of developers to race conditions.

At Motorola in 1997 we were building a network management system that polled hundreds of devices concurrently and processed SNMP traps in real time. Threads were not optional. Here is what we learnt.

The Thread Model

Java threads map to OS threads. Creating one is straightforward:

public class DevicePoller extends Thread {
    private final String deviceIp;
    private final int    pollInterval;

    public DevicePoller(String deviceIp, int pollInterval) {
        this.deviceIp     = deviceIp;
        this.pollInterval = pollInterval;
    }

    @Override
    public void run() {
        while (!isInterrupted()) {
            try {
                pollDevice(deviceIp);
                Thread.sleep(pollInterval);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }

    private void pollDevice(String ip) {
        // SNMP GET to ip, update shared device table
    }
}

You could also implement Runnable, which was preferable when the class already extended another class, since Java allows only single inheritance:

public class TrapReceiver implements Runnable {
    @Override
    public void run() {
        // receive and process SNMP traps
    }
}

Thread t = new Thread(new TrapReceiver());
t.start();
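Runnable later became a functional interface, so from Java 8 onward the same hand-off can be written as a lambda — a minor modernisation of the pattern above, with a placeholder body:

```java
public class TrapReceiverLambda {
    public static void main(String[] args) throws InterruptedException {
        // Runnable is a functional interface, so a lambda replaces the class.
        Thread t = new Thread(() -> System.out.println("receiving traps"));
        t.start();
        t.join(); // wait for the receiver thread to finish
    }
}
```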

The Fundamental Problem: Shared State

Threads are useless without shared state, and shared state without synchronisation produces corrupt data. We had a DeviceRegistry that all polling threads read from and wrote to:

public class DeviceRegistry {
    private final Map<String, DeviceStatus> devices = new HashMap<>();

    // NOT thread-safe — two threads can corrupt the HashMap
    public void update(String ip, DeviceStatus status) {
        devices.put(ip, status);
    }

    public DeviceStatus get(String ip) {
        return devices.get(ip);
    }
}

The fix was synchronized, which acquires the object's intrinsic lock:

public class DeviceRegistry {
    private final Map<String, DeviceStatus> devices = new HashMap<>();

    public synchronized void update(String ip, DeviceStatus status) {
        devices.put(ip, status);
    }

    public synchronized DeviceStatus get(String ip) {
        return devices.get(ip);
    }
}

This works. The problem is that synchronized on every method means only one thread can do anything with the registry at a time — readers block other readers unnecessarily.
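One way to let readers proceed concurrently is a read–write lock. ReentrantReadWriteLock arrived with java.util.concurrent in Java 5, well after the period described here, but it solves exactly this problem — a sketch, with DeviceStatus stubbed so it compiles standalone:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Stub for the article's DeviceStatus type.
class DeviceStatus {}

public class DeviceRegistry {
    private final Map<String, DeviceStatus> devices = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void update(String ip, DeviceStatus status) {
        lock.writeLock().lock();   // exclusive: blocks readers and writers
        try {
            devices.put(ip, status);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public DeviceStatus get(String ip) {
        lock.readLock().lock();    // shared: concurrent readers allowed
        try {
            return devices.get(ip);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Any number of threads can hold the read lock at once; the write lock waits until all readers release.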

Producer–Consumer with wait/notify

The pattern we used most was producer–consumer. SNMP traps arrived on one thread and were processed by a pool of worker threads. The hand-off used a shared queue:

public class TrapQueue {
    private final List<SnmpTrap> queue    = new LinkedList<>();
    private final int            capacity = 1000;

    public synchronized void enqueue(SnmpTrap trap) throws InterruptedException {
        while (queue.size() >= capacity) {
            wait(); // release lock, suspend thread
        }
        queue.add(trap);
        notifyAll(); // wake waiting consumers
    }

    public synchronized SnmpTrap dequeue() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // release lock, suspend thread
        }
        SnmpTrap trap = queue.remove(0);
        notifyAll(); // wake waiting producers
        return trap;
    }
}

The trap receiver thread calls enqueue; worker threads call dequeue. When the queue is empty, workers wait. When it fills, the producer waits. notifyAll wakes all waiting threads — less efficient than notify when there is a single waiter, but safer here: producers and consumers wait on the same monitor, so a lone notify can wake a thread of the wrong kind, which re-checks its condition and goes back to waiting while the signal is lost and the system stalls.
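Wiring the queue up looks roughly like this — a minimal sketch that repeats the TrapQueue from above and stubs SnmpTrap so it compiles standalone:

```java
import java.util.LinkedList;
import java.util.List;

// Stub for the article's SnmpTrap type.
class SnmpTrap {
    final String source;
    SnmpTrap(String source) { this.source = source; }
}

class TrapQueue {
    private final List<SnmpTrap> queue    = new LinkedList<>();
    private final int            capacity = 1000;

    public synchronized void enqueue(SnmpTrap trap) throws InterruptedException {
        while (queue.size() >= capacity) wait();
        queue.add(trap);
        notifyAll();
    }

    public synchronized SnmpTrap dequeue() throws InterruptedException {
        while (queue.isEmpty()) wait();
        SnmpTrap trap = queue.remove(0);
        notifyAll();
        return trap;
    }
}

public class TrapQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        TrapQueue queue = new TrapQueue();

        // One worker thread draining the queue.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    SnmpTrap trap = queue.dequeue(); // blocks while empty
                    System.out.println("processed trap from " + trap.source);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown signal
            }
        });
        worker.start();

        // Producer side: the trap receiver hands traps over.
        queue.enqueue(new SnmpTrap("10.0.0.1"));
        queue.enqueue(new SnmpTrap("10.0.0.2"));

        Thread.sleep(200);  // give the worker time to drain
        worker.interrupt(); // interrupts the blocked dequeue()
        worker.join();
    }
}
```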

Thread Pools

Creating a new thread for every polled device does not scale. With 500 devices at 30-second poll intervals that is 500 live threads, each with a stack. We built a simple thread pool:

public class ThreadPool {
    private final List<PooledThread> threads = new ArrayList<>();
    private final TrapQueue          queue   = new TrapQueue();

    public ThreadPool(int size) {
        for (int i = 0; i < size; i++) {
            PooledThread t = new PooledThread(queue);
            t.start();
            threads.add(t);
        }
    }

    public void submit(SnmpTrap trap) throws InterruptedException {
        queue.enqueue(trap);
    }

    public void shutdown() {
        for (PooledThread t : threads) {
            t.interrupt();
        }
    }

    private static class PooledThread extends Thread {
        private final TrapQueue queue;

        PooledThread(TrapQueue q) { this.queue = q; }

        @Override
        public void run() {
            while (!isInterrupted()) {
                try {
                    SnmpTrap trap = queue.dequeue();
                    processTrap(trap);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        private void processTrap(SnmpTrap trap) { /* ... */ }
    }
}

Ten workers handling 500 devices is far more efficient than 500 threads, because the threads spend most of their time blocked waiting for network I/O and the pool reuses them.

Deadlock

The most dangerous concurrency failure. We hit this exactly once in production. Two locks acquired in opposite order by two threads:

Thread A: lock(registry), then lock(alertQueue)
Thread B: lock(alertQueue), then lock(registry)

Both threads wait for a lock the other holds. Neither makes progress. The JVM provides no deadlock recovery — the threads hang forever.

The prevention rule is simple: always acquire locks in the same order across all threads. Define a global lock ordering and enforce it by convention.
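In code, the convention looks something like this — a hypothetical AlertManager with two locks, where every method acquires them in the same order (the counters exist only to make the example observable):

```java
public class AlertManager {
    // Global lock order: registryLock before queueLock, in every method.
    private final Object registryLock = new Object();
    private final Object queueLock    = new Object();

    private int updates;
    private int drains;

    public void updateAndAlert(String ip) {
        synchronized (registryLock) {   // first lock
            synchronized (queueLock) {  // second lock
                updates++;              // update registry, append alert
            }
        }
    }

    public void drainAlerts() {
        // This method mainly touches the queue, but it still takes
        // registryLock first to respect the global ordering.
        synchronized (registryLock) {   // first lock
            synchronized (queueLock) {  // second lock
                drains++;               // read registry state while draining
            }
        }
    }

    public int updateCount() { return updates; }
    public int drainCount()  { return drains; }
}
```

Because no thread ever holds queueLock while waiting for registryLock, the circular wait that caused the production hang cannot form.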

What Held Up

The producer–consumer pattern with wait/notify is still the conceptual model behind BlockingQueue in java.util.concurrent, which arrived in Java 5. The thread pool concept became ExecutorService. The vocabulary changed but the ideas are the same.
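The modern form of the whole pipeline, sketched with ArrayBlockingQueue and a fixed-size ExecutorService (both part of java.util.concurrent since Java 5 — the trap strings are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ModernTrapPipeline {
    public static void main(String[] args) throws InterruptedException {
        // Replaces the hand-rolled wait/notify TrapQueue.
        BlockingQueue<String> traps = new ArrayBlockingQueue<>(1000);

        // Replaces the custom ThreadPool class.
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String trap = traps.take(); // blocks; no wait/notify
                        System.out.println("processed " + trap);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        traps.put("trap from 10.0.0.1"); // blocks only if the queue is full
        traps.put("trap from 10.0.0.2");

        TimeUnit.MILLISECONDS.sleep(200);
        pool.shutdownNow();              // interrupts the blocked take() calls
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```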

What did not hold up: building your own synchronisation primitives. Every time we wrote custom wait/notify logic we introduced subtle bugs. The lesson is to push concurrency complexity down into a minimal set of carefully tested abstractions and keep application code as single-threaded as possible.