necovek
The one missed distinction is that concurrent tasks can be executing in parallel; concurrency just doesn't imply that they are or aren't.

Basically, all parallel tasks are also concurrent, but there are concurrent tasks which are not executed in parallel.

duped
If you prefer to learn by video, here's an excellent talk on the same subject by Rob Pike that I link people to all the time:

https://www.youtube.com/watch?v=oV9rvDllKEg

rdtsc
I like to think of them as different levels. Concurrency is at a higher abstraction level: steps that can execute without needing to wait on each other. Parallelism is a bit lower and reflects the ability to actually execute the steps at the same time.

Sometimes you can have concurrent units like multiple threads, but a single CPU, so they won't execute in parallel. In an environment with multiple CPUs they might execute in parallel.
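
rdtsc's single-CPU case can be sketched in Go. This is a minimal illustration, not anyone's production code: `runtime.GOMAXPROCS(1)` restricts execution to one processor, so the goroutines remain concurrent (independently schedulable units) but cannot run in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runConcurrent launches n goroutines on a single processor. They are
// concurrent (the scheduler interleaves them), but with GOMAXPROCS(1)
// no two of them ever execute at the same instant.
func runConcurrent(n int) []string {
	runtime.GOMAXPROCS(1) // one processor: concurrency without parallelism
	results := make(chan string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results <- fmt.Sprintf("task %d done", id)
		}(i)
	}
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	for _, r := range runConcurrent(3) {
		fmt.Println(r)
	}
}
```

On a multicore machine, removing the `GOMAXPROCS(1)` line lets the same concurrent program also run in parallel, which is exactly the "different levels" point above.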

CalRobert
I just learned something! I realize now I was talking about parallelism in a recent interview question about concurrency. Oh well.
dragontamer
Concurrency is often about overlapping your I/O waits to achieve higher throughput. For example, one computer handling 50 concurrent HTTP requests.

No single HTTP request uses all the CPU power or even your Ethernet bandwidth. The bulk of your time is spent waiting on latency. So while one task is waiting on Ethernet responses under the hood, the system should do something else.

Hard drives are another example: you can get random-I/O bandwidth of around 5 MB/s, but every request takes about 4 ms on average for a 7200 RPM drive (aka: 120 rotations per second, or about 8.3 milliseconds per complete rotation, so roughly 4 ms on average for any request to complete).

So while waiting for the HDD to respond, your OS can schedule other reads or writes along the head's path, which improves average performance (ex: if 8 pending requests all lie along the head's path, you'll still wait about 4 ms on average for the first, but each subsequent one may complete roughly 1 ms later).

----------

Parallelism is often about CPU-limited situations where you use a 2nd CPU (today called a core). For example, if one core is too slow, you can use a 2nd, or 8, or even 128 cores simultaneously.
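
A minimal sketch of that CPU-bound case in Go (the chunked-sum scheme is just an illustration I picked, not something from the comment): the work is split across worker goroutines, and with multiple cores available the chunks genuinely execute at the same time.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits a CPU-bound summation across nWorkers goroutines.
// With GOMAXPROCS > 1 (the default on multicore machines), the chunks
// run in parallel on different cores.
func parallelSum(nums []int, nWorkers int) int {
	var wg sync.WaitGroup
	partials := make([]int, nWorkers) // one slot per worker: no shared writes
	chunk := (len(nums) + nWorkers - 1) / nWorkers
	for w := 0; w < nWorkers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if hi > len(nums) {
			hi = len(nums)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range nums[lo:hi] {
				partials[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait()
	total := 0
	for _, p := range partials {
		total += p
	}
	return total
}

func main() {
	nums := make([]int, 1000)
	for i := range nums {
		nums[i] = i + 1
	}
	fmt.Println(parallelSum(nums, runtime.NumCPU())) // sum of 1..1000 = 500500
}
```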

------------

Hyperthreading is the CPU designers' (Intel and AMD) recognition that the above concurrency technique also applies to modern RAM, because a single RAM read takes around 50 ns, or roughly 200 clock ticks. Any RAM-latency-bound problem (ex: linked-list traversals) benefits from the CPU core doing something else while waiting for RAM to respond.

-----

Different programming languages have different patterns to make these situations easier to program.

bryanrasmussen
thinking about this - is there a term for tasks whose execution only partially overlaps? That is to say, X starts at 0.1 and ends at 1.0, and Y starts at 0.2 and ends at 0.9 - X and Y are not parallel, but they are something that I'm not sure the technical term for. (this is assuming they are not executed concurrently either, of course)
bugtodiffer
Learn Go and you will understand concurrency