Scala: Up and running

Running streaming tests with SBT in one window and a simple text editor in another leads to a very quick feedback loop. The overall experience is quite pleasant!

Here’s how I set it up.

First, ensure SBT is installed. This can be verified with the command `sbt sbtVersion`. Currently I have 0.13.7 installed.

Next, pick the directory to work in and create a file named build.sbt in that directory. This is where all of the project-specific SBT configuration will end up, at least for as long as the project's build doesn't become too complex.

The most basic build.sbt I use looks pretty close to this.

lazy val commonSettings = Seq(
  name := "example",
  organization := "com.jjoedouglas",
  version := "0.1.0",
  scalaVersion := "2.11.6"
)

lazy val root = (project in file(".")).
  settings(commonSettings: _*).
  settings(
    libraryDependencies ++= Seq(
      "org.scalatest" % "scalatest_2.11" % "2.2.5" % "test"
    )
  )

There's only one dependency, ScalaTest, which we'll need for streaming unit tests in SBT later on.

Now it's possible to kick off streaming tests in SBT. In your console window, navigate to the project's root directory (the one with build.sbt in it) and execute the command `sbt ~test`. The `~` prefix causes SBT to keep watch over the files in the directory and re-execute the command whenever they change. So we could run `sbt ~compile` if we wanted to, but `sbt ~test` gets us the compile step as well.

The project is structured similarly to how projects in Java are structured. We’ll add the following folders and files.

\src\main\scala\example.scala
\src\test\scala\example.scala

Now we're in TDD land, so let's create a test first. ScalaTest is rather configurable and is worth investigating on its own; however, I will gloss over it here and just use a FlatSpec. Feel encouraged to check out the link in the references for a better idea of the flexibility of ScalaTest. Here are the contents for the new files.

\src\test\scala\example.scala:

package com.jjoedouglas

import org.scalatest._

class ExampleSpec extends FlatSpec {
  "An example" must "say hello" in {
    assert(Example.speak() == "hello")
  }
}

\src\main\scala\example.scala:

package com.jjoedouglas

object Example {
  def speak(): String = {
    "hello"
  }
}

Hopefully while adding files you saw SBT attempting to run tests, saw build failures if you created the tests first, and saw the test ultimately passing.

I did run into a small gotcha while writing this: `sbt ~ test` does not work the same as `sbt ~test`. The former waits for changes and does nothing with them until you force it to exit, then runs the tests; the latter runs the tests after changes have been made. This is a little odd, as the documentation makes it look like `~ test` should work from the sbt console.

References:
ScalaTest – Selecting test styles for your project
Getting Started with sbt – Hello World
Getting Started with sbt – Running

What if: inheriting functions

What if functions could be inherited the same way as classes?

This thought occurred to me while I noticed some repeated code between a few methods.  After talking to a friend about it for a little bit, I realized this is just another route to higher-order functions (albeit a less flexible one).  While in the end it wouldn't be much more useful than anything already available, the journey does illuminate some other ways to look at functions and methods.

Inheritance is something that happens between types.  A derived class IS-A member of the base class.  So what is the type of a function?  It’s a parameterized (aka, generic) class with a variable number of type parameters.  There’s one for each argument and a final one for the result.  For example addition is a function that takes two numbers and returns a number, or `Func<Number, Number, Number>` in pseudo-C# parlance.  There’s one method on the function type, called apply.  It takes the arguments and returns the result.

If a function is derived from an already specified function, the input and output types must match.  After all, the derived function IS-A member of the base function.  The only possible difference would be what happens during apply.

This has been abstract, so seeing at least a loose example would be useful.  This isn’t supported in C# (nor should it be), so the syntax is going to be creative. The scenario is a web application that accepts an ID and if it can’t find the object identified it returns a 404.


public ActionResult Get(int id) {
  var thing = db.query<Thing>().FirstOrDefault(x => x.id == id);
  if(thing == null)
    return HttpNotFound();

  return doSomeInterestingStuff();
}

This gets done across a number of methods, so we pull it out into its own function, which we can then inherit with our fictional feature.

private ActionResult GetOr404(int id) {
  var thing = db.query<Thing>().FirstOrDefault(x => x.id == id);
  if(thing == null)
    return HttpNotFound();

  //so it's not clear what to return here..
}

So it’s not clear what to return from that method. Another piece is needed. This piece would be to declare the function as “abstract-ish”, to indicate there is some part of functionality that is not defined and needs to be for the function to be callable. The deriving class would fill that in.

private ActionResult GetOr404(int id) {
  var thing = db.query<Thing>().FirstOrDefault(x => x.id == id);
  if(thing == null)
    return HttpNotFound();

  return childApply(id);
}

public ActionResult OriginalGet : GetOr404 {
  return doTheInterestingThing();
}

Now there's another oddity. If childApply has the same signature as its parent method, then the same operation needs to be repeated (retrieving the thing).

This is just a re-implementation of higher-order functions. Higher-order functions are simply functions which take a function as a parameter and/or return a function. This would be implemented as follows.

private ActionResult GetOr404(int id, Func<Thing, ActionResult> theInterestingThing) {
  var thing = db.query<Thing>().FirstOrDefault(x => x.id == id);
  if(thing == null)
    return HttpNotFound();

  return theInterestingThing(thing);
}

public ActionResult Get(int id) {
  //where doTheInterestingThing is defined to take a thing
  //and do some more interesting computation
  return GetOr404(id, doTheInterestingThing);
}
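
For contrast, the same shape falls out naturally in a language with first-class functions. Here's a hedged sketch in Scala; `Thing`, the in-memory `db` map, and the result types are all hypothetical stand-ins for the web framework pieces.

```scala
// Hypothetical stand-ins for the framework's types.
case class Thing(id: Int, name: String)

sealed trait ActionResult
case class Ok(body: String) extends ActionResult
case object HttpNotFound extends ActionResult

object GetOr404Example {
  // A stand-in for the database query.
  private val db = Map(1 -> Thing(1, "widget"))

  // The shared "get or 404" logic; the interesting part is a parameter.
  def getOr404(id: Int)(interesting: Thing => ActionResult): ActionResult =
    db.get(id) match {
      case None        => HttpNotFound
      case Some(thing) => interesting(thing)
    }

  def get(id: Int): ActionResult =
    getOr404(id)(thing => Ok(s"hello, ${thing.name}"))
}
```

Note that `getOr404` hands the already retrieved `Thing` to the interesting part, so nothing is looked up twice.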

Quick sort

Quick sort in general performs a little faster than merge sort but offers a lot of variability.  It’s possible to get merge sort running time performance with less memory used via the variations on quick sort.  I won’t go into the variations beyond just mentioning them and what they provide.  Here’s the algorithm:

  1. If the array has one or fewer elements, it is sorted.
  2. Pick an element from the array, it’ll be referred to as the pivot.
  3. Move the pivot to the end of the array, then scan the remaining elements from left to right, partitioning the array so that all elements to the left of the pivot end up less than or equal to the pivot and all elements to the right end up greater than or equal to it:
    1. Keep a boundary index marking the end of the elements known to be less than or equal to the pivot.
    2. When the scan finds an element less than or equal to the pivot, swap it with the first element past the boundary and advance the boundary.  Once the scan is done, swap the pivot with the first element past the boundary so the pivot sits between the two partitions.
  4. To the left of the pivot are all elements which are less or equal to the pivot, to the right are all elements which are greater than or equal to the pivot.  This means the pivot is in its sorted position.
  5. Repeat from step 1 for the array of elements of the left and the array of elements on the right of the pivot.
  6. Once all subarrays have been sorted, the input array has been sorted.
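
The steps above can be sketched in Scala. This is a minimal in-place version that simply uses the last element of each range as the pivot; smarter pivot selection is a separate concern.

```scala
object QuickSort {
  def sort(a: Array[Int]): Unit = sort(a, 0, a.length - 1)

  private def sort(a: Array[Int], lo: Int, hi: Int): Unit =
    if (lo < hi) {
      val p = partition(a, lo, hi)
      sort(a, lo, p - 1) // elements <= pivot
      sort(a, p + 1, hi) // elements >= pivot
    }

  // Partition a(lo..hi) around a(hi); returns the pivot's final index.
  private def partition(a: Array[Int], lo: Int, hi: Int): Int = {
    val pivot = a(hi)
    var boundary = lo // everything left of boundary is <= pivot
    for (i <- lo until hi)
      if (a(i) <= pivot) {
        swap(a, i, boundary)
        boundary += 1
      }
    swap(a, boundary, hi) // put the pivot between the partitions
    boundary
  }

  private def swap(a: Array[Int], i: Int, j: Int): Unit = {
    val t = a(i); a(i) = a(j); a(j) = t
  }
}
```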

The quick sort name doesn’t really give much insight into how the algorithm works.  Wikipedia says it is sometimes called partition-exchange, which is a mouthful but does give some idea of what the algorithm does.

The good:

In the average case quick sort has O(n log n) running time and can perform about two to three times faster than merge sort.  The algorithm given above won’t be able to achieve that however.

Using an in-place implementation of quick sort (that is, an implementation that modifies the input array) a space complexity of O(log n) is possible.  This is achieved by using a variation described by Robert Sedgewick.  This variation uses an in-place and unstable partitioning algorithm, which is the same as described above in the algorithm description.  Once partitioned, the smallest partition is sorted recursively while the larger is sorted via tail recursion.  If a compiler supports tail recursion optimization this leads to log n stack frames, indicating a memory complexity of O(log n).

The worst case performance of O(n²) can be avoided by intelligently selecting a pivot.  Robert Sedgewick to the rescue again!  By picking the median of the first, middle, and last elements, the algorithm selects pivots which are closer to the true median of the array.  This results in significantly fewer swaps and comparisons when sorting already sorted arrays.
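
A minimal sketch of the median-of-three selection, assuming the pivot is being chosen for the subarray a(lo..hi):

```scala
object PivotSelection {
  // Index of the median of the first, middle, and last elements of
  // a(lo..hi); using it as the pivot avoids the sorted-input worst case.
  def medianOfThree(a: Array[Int], lo: Int, hi: Int): Int = {
    val mid = lo + (hi - lo) / 2
    val byValue = List(lo, mid, hi).sortBy(i => a(i))
    byValue(1) // the index holding the middle of the three values
  }
}
```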

At the cost of space complexity quick sort can be made stable.

The bad:

It is very easy to write a quick sort implementation that performs like a worse merge sort.  Given an already sorted array, a poor choice of pivot, and a less space-conscious partitioning algorithm, a quick sort implementation can have O(n²) running time and O(n) space complexity.  This is worse than even insertion sort!

The name is horrible.  Quick sort can be implemented and used on inputs which make it slower than merge sort.  If that's the case then its name is a complete misnomer.  A name like quick sort can also mislead the uneducated.  Consider someone who does not know the differences between merge sort, insertion sort, and quick sort and has an array of nearly sorted items.  This person may trust the name and use quick sort, which may perform abysmally!

P.S.  Robert Sedgewick teaches a few algorithm courses on Coursera.  I found them difficult but completely worth it.

Merge sort

Merge sort is a slightly more involved sorting algorithm that trades increased memory usage for shorter running time.  Here’s the algorithm:

  1. Split the unsorted array into individual elements.  These individual elements are now that many sorted arrays (an array with 1 element is “trivially sorted”).
  2. If there is one (or zero) sorted array(s), the input has been sorted and the algorithm halts.
  3. Take the two smallest sorted arrays and merge them together using the following algorithm
    1. Take the indices of the first elements of both arrays.
    2. Compare the elements at the indices
    3. Put the lowest element into the first position of a new sorted list and advance the index of the lowest element to the next element in its list.
    4. If there is no next element, put all succeeding elements in the other array at the end of the new sorted list and stop merging.  The original sorted lists are removed from the set of sorted lists and the new sorted list is put in their place.  Continue from step 2.
    5. If there is a successor element in the array, continue from step 3.2.
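
The merge step above translates almost directly to Scala. This sketch is a top-down version rather than the bottom-up "merge the two smallest lists" version described, but the merge operation itself is the same.

```scala
object MergeSort {
  // Top-down merge sort over immutable lists.
  def sort(xs: List[Int]): List[Int] =
    if (xs.length <= 1) xs // a list of one (or zero) elements is sorted
    else {
      val (left, right) = xs.splitAt(xs.length / 2)
      merge(sort(left), sort(right))
    }

  // Merge two already sorted lists; at most one comparison per element.
  private def merge(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
    case (Nil, ys) => ys // one side exhausted: take the rest of the other
    case (xs, Nil) => xs
    case (x :: xs, y :: ys) =>
      if (x <= y) x :: merge(xs, b) // taking the left element first keeps it stable
      else y :: merge(a, ys)
  }
}
```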

Merge sort gets its name because it merges sorted lists together.  This merge operation is pretty efficient, as it only needs to do a number of comparisons equal to or less than the total nodes in both lists to be merged.

Merge sort is a divide and conquer algorithm.  It takes a larger problem and breaks it down until it has many smaller simpler problems, then builds up from there.  It uses the fact that an array with a single element is, by definition, sorted to get started.  The merge operation will maintain the sorted nature of the resulting merged array if the originating arrays are also sorted.

The good:

Merge sort has a good run time complexity of O(n log n).  This comes from the fact that merge operations are O(n) themselves.  Since we subdivide the initial input and stitch it back together, we end up doing these operations log n times.  Often one can get a whiff that a log n factor is at play because the execution will resemble a tree.  In merge sort's case, after splitting the input into individual nodes, each pass joins successively larger lists via the merge operation.  The number of passes will be equal to log n since we join the smallest lists: two single-element lists become a two-element list, which becomes a four-element list, and so on until we reach the final list.

Merge sort can take some advantage of already sorted lists if the merge operation is set up to do so.

Merge sort can be implemented to be a stable sort.  That is, it won’t swap two elements which have the same sort order.

Merge sort does well with sequential access data structures (linked lists, cons lists, etc) where many others have trouble.  Some languages make such data structures very easy to use (lisp, haskell, etc).

The bad:

Merge sort will use an increasing amount of memory the larger the input array gets.  If space is a concern then merge sort might be out of reach.  This extra space comes from the extra arrays the merge operation must create.  Since the original arrays can be discarded as it runs, it is only the last array that sets the maximum memory used by merge sort.

The bottomline:

Merge sort allows a trade-off between space and time complexity.  It's very common to want to make this trade-off, as space can be incredibly cheap.  Merge sort is also stable and can work well with sequential data structures, both attributes that some similar algorithms do not have.

Selection Sort

Selection sort is another simple sorting algorithm.  Here it is:

  1. Assume the entire set is unsorted
  2. Get the index of the next element (or first element if an element hasn’t been selected yet).  If there is no next element to get the index of, the set is sorted and the algorithm halts.
  3. Scan the set of unsorted elements for the smallest element
  4. Swap that element with the index found in step 2.
  5. Repeat at step 2.
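
As a sketch, the algorithm in Scala:

```scala
object SelectionSort {
  // In-place selection sort: for each position, scan the unsorted
  // remainder for the smallest element and swap it in.
  // At most one swap per position, so O(n) swaps total.
  def sort(a: Array[Int]): Unit =
    for (i <- a.indices) {
      var min = i
      for (j <- i + 1 until a.length)
        if (a(j) < a(min)) min = j
      if (min != i) {
        val t = a(i); a(i) = a(min); a(min) = t
      }
    }
}
```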

It’s called selection sort because it selects the next element of the sorted set from the unsorted set and appends it to the sorted set.

The good:

Selection sort has a really simple algorithm.

In the worst case, O(n) swaps are used.  It's not possible to do better than that!  If swapping is expensive, selection sort (or one of its variants) becomes attractive in proportion to just how expensive swaps are.

Selection sort has constant memory usage.  Only one extra pointer is needed to facilitate swapping elements, no matter the number of elements.

The bad:

Selection sort has O(n²) running time complexity.  The number of elements traversed is n(n+1)/2, which simplifies to (n² + n)/2.  This isn't as bad as insertion sort's worst case, but as the number of elements grows extremely large, both algorithms will struggle to produce a sorted set in a reasonable amount of time.

Selection sort performs just as poorly on an unsorted set as a sorted one.

The bottom line:

Selection sort uses the minimum number of swaps, which can make it desirable in specific scenarios.  It also has slightly better running time than insertion sort if the set isn't likely to be partially sorted; of course, if the set is likely to be sorted then insertion sort will perform significantly better.

In short, reach for selection sort if swaps are expensive but avoid it if a large number of elements need to be sorted.  It also does not benefit from partially sorted input where other algorithms do.

Insertion Sort

Insertion sort is an extremely simple sorting algorithm.  Without further ado, the algorithm!

  1. Given an array, consider the first element sorted and all after to be unsorted.
  2. If no unsorted elements exist, the array has been sorted and the algorithm can stop
  3. Check the first unsorted element and compare it to the last sorted element.
    1. If the unsorted element is greater than the sorted element, it can be considered to be part of the sorted list.  At this point the algorithm returns to the second step.
    2. If the unsorted element is less than the sorted element
      1. Note the index of the sorted element that was compared against
      2. Identify the next highest sorted element (it’s going to precede the sorted element) and execute step 3 until a sorted element is less than or equal to the unsorted element.
      3. Shift all elements with index greater than or equal to the recorded index from step 3.2.1 up one position and insert the unsorted element into the recorded index.
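
The steps above can be sketched in Scala; the inner while loop does the "find the spot and shift" in a single pass:

```scala
object InsertionSort {
  // In-place insertion sort: walk the unsorted portion, shifting larger
  // sorted elements up one slot, then drop the element into the gap.
  def sort(a: Array[Int]): Unit =
    for (i <- 1 until a.length) {
      val current = a(i) // first unsorted element
      var j = i - 1
      while (j >= 0 && a(j) > current) {
        a(j + 1) = a(j) // shift the larger sorted element up one position
        j -= 1
      }
      a(j + 1) = current // insert where it belongs in the sorted portion
    }
}
```

Note the strict `>` in the comparison: equal elements are never moved past each other, which is what keeps the sort stable.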

It’s called insertion sort because it takes unsorted elements and directly inserts them where they’d be in the sorted list.

The good:

Insertion sort uses a constant amount of memory.  All that is necessary is an extra pointer to hold an element as it’s being swapped.  Insertion sort could be given a list of a million elements (the horror!) and it would still only need one extra pointer.

Insertion sort is stable.  That is, it won’t swap the positions of two elements if they’re equal.  If elements A and B have an equal ordering but A appears before B, insertion sort will not accidentally swap these.


The bad:

Insertion sort has O(n²) running time complexity.  Consider the worst case scenario where an array is in the reverse order.  Every unsorted element will trigger a traversal of the entire sorted list.  This means every element will be visited once for every element in the array.  Worse still, if shifting elements is done via swaps, those nodes will find themselves also swapped just as many times.

While insertion sort uses constant memory, it does move elements around a lot.  If that's an expensive operation then insertion sort is a non-starter.  An example of a scenario where swaps are expensive would be sorting a set stored in a remote store which is too big to pull down in its entirety and push back in sorted form.

Bottom line:

Insertion sort is very good at sorting very small sets quickly and efficiently.  It’s also good at dealing with sets that are mostly already sorted.  In cases where that is likely, it’s a good fit.  However as sets increase in size and aren’t already sorted, the performance falls through the floor in short order.

In short, use insertion sort if it’s going to be a small number of mostly sorted elements.

Don’t fear type inference

This post is about the var keyword in C#.  The var keyword in C# is not the same thing as the var keyword in Javascript.  I've seen this get confused; var does not allow loose typing in a statically typed language.  Var provides type inference, a compiler feature that deduces the actual type and plugs it in at compile time.  Type inference allows the developer to omit the type but know that it will still be checked at compile time.

Wait, why would I want to ignore the type?

Generally speaking, the less information one must have to understand a given piece of code, the better that code is considered to have been written.  The var keyword allows some redundant phrases to be stripped out of the code.  Consider the line `Foo barModifier = new Foo();`.  I've been told twice that this is a Foo, once by the constructor and once by the type signature; var lets me remove the type signature and get on with what's actually happening.
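
Scala's val works the same way as C#'s var, so here's the idea as a small Scala sketch (the names are illustrative):

```scala
object InferenceDemo {
  // The compiler infers the static type; nothing becomes dynamically typed.
  val barModifier = new StringBuilder("bar") // inferred: StringBuilder
  val lookup = Map("a" -> 1, "b" -> 2)       // inferred: Map[String, Int]

  // The type is still fixed and checked at compile time:
  // val s: String = lookup("a") // does not compile: found Int, required String
  val n: Int = lookup("a")       // the inferred Int is usable as an Int
}
```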

Var offers some flexibility for refactoring code.  If a variable's type is going to be inferred and I change the type of the instance, there's less code for me to update throughout the method.  To be honest, if the code is written well this should be a small savings, as methods should generally be kept short and direct.  When working in code bases which are not friendly to var, I will use var as I'm developing, and before checking in I'll convert all instances of the keyword to the explicit type (ReSharper is very useful for this; two keystrokes and I'm done).

But I heard var makes code less readable

The arguments that var detracts from code readability come more from other practices detracting from readability than from type inference.  Take, for example, the claim that var leads to longer Hungarian-style variable names.  I've seen this done just as frequently when the type name is explicitly stated.

Have you ever seen a parameterized type which takes a parameterized type as an argument?  How about three levels or more deep?  Take for example `Dictionary<string, Expression<Func<int, int, string>>>`.  This type is a mouthful; seeing it littered throughout code will distract from what is probably already some complex code.  The reader likely doesn't care much about the type, they care more about the behavior, so it's a disservice to force them to read the type over and over again.

The var keyword can only be used in method bodies, therefore it will never find its way into interfaces.  It will forever exist only as an internal concern of a method.  So it has absolutely zero impact on outside code and is limited to being an implementation detail.  It's on the same level as code comments (except you can't use var to communicate the opposite of what the code does, which I've seen done with comments).

In my years of development, I've never come across a scenario where I needed to spell out the types at play in a method body.  The compiler checks these things for me and tells me if anything crazy is going on.  Var lets me focus my code on describing the behavior I want to produce; that means var lets me communicate intent more clearly, which yields more readable code.

Don’t be afraid of type inference.  It’s a tool there to help you.

Write code that doesn’t build – Compiler Driven Development

Anybody who’s worked with me for a significant length of time knows that I’m a fan of an idea I call Compiler Driven Development.  They know that I like compiler errors and encourage developers to start with code that may not compile and use the compiler to iteratively make better sense of it.

What is Compiler Driven Development?

Consider the case of a dictionary keyed off an object's identifying property.  Let's assume that property is an integer, so we have a dictionary keyed off of an integer.  The problem is any integer will do, yet we intend to only use the identifying properties of our original objects.  To use any other integer would obviously be an error.  In this theoretical code's current form, the error wouldn't be spotted until the code is running.  Instead of spending testing time looking for interesting bugs, it's spent discovering this.

Compiler Driven Development can be applied to this problem.  We can change the type signature of the dictionary to only accept our original object.  We can move the concern of figuring out the identifying properties of those objects into how we obtain their hash code and their notion of equality.  Now it’s not possible to use the identifying property of a different object to key off of this dictionary.  Testing effort doesn’t need to be spent on this (now nonexistent) issue and moves on to questioning more interesting aspects, such as performance and usability.
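
A hedged sketch of the idea in Scala (`Thing`, `ThingKey`, and `Catalog` are hypothetical names): wrap the raw integer in a dedicated key type, so an arbitrary integer can no longer be used as a key.

```scala
case class Thing(id: Int, name: String)
case class ThingKey(id: Int) // a distinct type, not interchangeable with Int

object Catalog {
  private var byKey = Map.empty[ThingKey, Thing]

  def put(thing: Thing): Unit =
    byKey += (ThingKey(thing.id) -> thing)

  def get(key: ThingKey): Option[Thing] = byKey.get(key)
  // Catalog.get(42) does not compile: a bare Int is not a ThingKey,
  // so the keying assumption is enforced by the compiler.
}
```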

This situation can be expanded upon.  What would happen if this dictionary were used all over the application?  By refactoring the signature of the originating dictionary, the compiler will tell you the exact location of every keying assumption that was made.  It becomes a trivial exercise to find and correct them.  Without the compiler's help, the regression testing scope would be massive and this refactor may not be feasible at all.

Why?

Compiler errors, when done right, communicate that what was written doesn't make much sense.  Without the build breaking, these would be errors leaking out to production.  These errors might be immediately obvious, or could sit in some neglected backwater of the application where they'll resurface a year after the originating changes were made.  A compiler can check that certain categories of errors are not present in a large application, and it can do it much faster than any human being could ever hope to.

There are ways around compilation errors where one can basically opt out of the warm protective cocoon of the compiler’s automatic error checking.  Thankfully these methods are very obvious, such as explicit casting or using strings instead of a more indicative type.  Whenever I see these, the first thing I think is, “here be dragons” because the compiler no longer has my back.

Statically Typed Languages only.

This plays directly into static typing.  Languages which offer nothing more than syntax checking give very little protection.  I would never wish a million-plus-line application written in Javascript upon anybody for this reason.  Dealing with legacy code in such an environment can easily become a nightmare, as assumptions will likely be littered throughout: assumptions that a given method returns an object with the specified members.  The only way to mitigate this is to have extremely high unit test coverage where the interfaces of returned objects are checked; in essence, rebuilding static typing in the unit testing suite.  As statically typed languages get more sophisticated, there will be more assumptions which dynamically typed languages must make that statically typed languages will have automatically verified.

This means over time unit test suites will shrink in size for statically typed languages.  This is a good thing!  It means less code must be maintained for the same amount of safety.  It also means adding features requires less code to be written for the same amount of safety.  Both maintenance and new features benefit from these language features in statically typed languages.

Static typing means I can guide other developers in how code is meant to be used.  StackExchange.Redis has a wonderful little class, called RedisValue, which is a good example of this.  RedisValue implicitly converts from a number of types.  Simply scanning the class definition I am immediately aware that byte arrays, strings, ints, doubles, etc are all easily supported.  If I try to cram an instance of my Foo class into Redis, I’ll quickly find out there’s no good conversion readily available and I’ll need to figure out how to map into one of the available conversions.  I find this out at build time, before I’ve run any code at all.  In fact, likely I’ve found that out before I even went to build due to type checking done in the IDE.
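
The same pattern can be sketched in Scala with implicit conversions; `StoreValue` and `save` are illustrative names, not the StackExchange.Redis API.

```scala
import scala.language.implicitConversions

// An illustrative value type that advertises its supported conversions.
case class StoreValue(raw: String)

object StoreValue {
  // Scanning these definitions shows at a glance what can be stored.
  implicit def fromString(s: String): StoreValue = StoreValue(s)
  implicit def fromInt(i: Int): StoreValue = StoreValue(i.toString)
}

object Client {
  // A stand-in for a client method that accepts only StoreValue.
  def save(value: StoreValue): String = value.raw
}
// Client.save("hello") and Client.save(42) compile via the conversions
// above; Client.save(new Object) is a compile-time error, caught before
// any code runs.
```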

One tool in a box of many.

Compiler Driven Development doesn’t replace Test Driven Development, just like how Behavior Driven Development doesn’t replace Test Driven Development.  Both Compiler Driven Development and Behavior Driven Development improve upon Test Driven Development but do not replace it.

Compiler Driven Development is more about good tooling than anything else.  It’s about leveraging the work of the very intelligent people who have come before us and avoiding having to encounter the same simple regressions which they have proven avoidable.  It’s about having the freedom to make a tiny change in one location and seeing it ripple out before the users in production do.  It’s about communicating intentions behind the code in such a way that they are enforced long after you’ve moved on.

There are some things Compiler Driven Development cannot catch.  It cannot catch usability issues and performance issues.  It cannot catch issues which cannot be caught by the compiler (aka, they cannot be encoded into the type system).  It cannot catch erroneous implementations of algorithms.  I seriously doubt it will replace good testing practices in my lifetime, if ever.  It can however make those testing practices much more effective by reducing the scope to more interesting problems.

Compiler Driven Development is not, "it compiles so it must be right".  It's, "it compiles so I know that these properties hold".  As statically typed languages become more sophisticated, we will be able to encode more and more of these provable properties.  Knowing how to leverage the type system becomes more and more important as the weight of code increases.

I do not envy those writing code for large Node.js applications.


P.S. Linting counts as a light form of Compiler Driven Development.  Linting still allows programs which may not work to be run in spite of the linting tool identifying the issue.

Motivation is a sucker’s game.

Lately the concept of motivation seems completely worthless.

Motivation allows:

  • Waiting for motivation to strike before starting. “I will when I’m feeling more up to it.”
  • Excusing one’s self from poor performance. “I just wasn’t motivated.”
  • A surrendering of autonomy. “I wish I was more motivated.”

In short, it seems the concept of motivation in relation to one’s actions reinforces passivity.  We wait for motivation to strike, we don’t introspect beyond a lack of motivation, we don’t ask ourselves what we really want to be doing.

Instead of talking about motivation, I prefer to talk about inertia.  When something has become a long-standing habit it becomes difficult to break; this is inertia in action.  Starting new habits is difficult, it takes energy and effort.  Sometimes it seems motivation is what we label this energy and effort, but then we blind ourselves to the fact that it comes from within ourselves.

When confronted with a new effort to undertake, I look to harness inertia.  I block off some time and pick some small task.  The ordering is important: stake out some time first, more than is needed for the small task.  After that time is blocked off, I pick some straightforward task and start.  By the time I'm done with that task, I can see another task or two that need action, and I'm already in that area.  It's a short reach to start the second, then the third, then the fourth.  Before too long I've made that entire time productive, regardless of whether I actually felt up to all that I accomplished before starting.

Inertia allows:

  • Acknowledging that the initial impetus is difficult and must be done. “I don’t know how to get through all of this, but I do know how to get through this small bit.”
  • A clearer picture of why one may not want to do a task.  It’s no longer about the nebulous lack of motivation but because other priorities jumped in line.
  • One’s actions are front and center.  If one wanted to take a break, they could and say why. “I don’t feel motivated” becomes “I’d rather not”.  There’s nothing wrong with not wanting to do something, as long as one is honest with themselves.

Playing the motivation game is about giving up control and forgoing worthwhile introspection.  It's better not to play at all than to try to play better.  It's a sucker's game.

What’s a learning group?

Recently I started a Haskell Learning Group with a number of people from work. If you’re like me, or even most of the people I’ve talked to, you probably have some vague notion about what a learning group is. It’s odd how familiar a completely made up term can feel.

A learning group sits somewhere between a user group and a reading group.

A user group normally carries the connotation that the members use the group's shared tool in some capacity. Of course these groups are open to people who are new to the technology, however it seems that is a secondary, almost incidental aspect. Commonly there are presentations done by members more experienced with the technology. These meetings offer freedom in what is presented by requiring members to volunteer themselves and a topic.

Reading groups form around either a specific text or a specific topic. Normally there are regular meetings with an expectation that the members will do their own research so they can reconvene and discuss the material. The focus on learning is certainly more front and center than in a user group, however the meeting structure tends to be more predictable and rigid.

A learning group focuses on learning a technology more than a user group does. Specifically it is explicit in stating that it is intended for people who do not consider themselves very experienced. Presentations within the group are volunteered by the membership, similar to a user group, however it is expected that the person giving the presentation had difficulty immediately understanding the topic.

If you want to learn, teach.

Where a learning group differs from a reading group is in the distribution of material covered. While the norm in a reading group is for everybody to cover the same material, a learning group is much more self-directed. Members are expected to cover (or not cover) material at their own pace. However, it is expected to be common practice that solutions to exercises are swapped with other members to compare work, even when that work wasn't done by the entire group. The hope is there will be more cross-pollination of methods by not focusing on the same problem sets at all times.