Wednesday, November 11, 2009

PRES

Debugging tools have come a long way--who would have thought that with the push of a button you could step through code in different languages, running across multiple machines, in a seamless environment?  These tools are invaluable, but what do you do when a bug hasn't yet shown up in an environment you've configured for debugging?  Enter the nasty no-repro bug.  These are the gremlins that all software developers dread--an issue either reported or confirmed, the details of which we don't know.  How do we handle this?  If the error is reproducible in a certain environment, we might have the luxury of temporarily instrumenting the code and incurring the associated high overhead, or we can just start guessing (see the Speculation pattern ;-).

Uniprocessor debugging is hard because issues can originate anywhere across space (the code) and time.  On a multiprocessor system, we introduce additional dimensions in which errors can occur, so we end up with something that feels intuitively like geometric growth in debugging complexity.

A tool like PRES is a welcome addition to the developer's troubleshooting arsenal.  I agree that bugs don't *need* to be reproduced on the very first replay attempt in the lab; especially if the replays are entirely automated, a small number of replays can easily be tolerated (and really, what other choice do you have if you can't withstand the overhead of exhaustive logging?).

The sources of nondeterminism that make this whole process difficult can be hard to reproduce, because some of them arise from low-level mechanisms like interrupts.  Virtual machine technology can help alleviate this by virtualizing things that were once purely hardware-controlled, with little possibility of intervention--running on a VM, a tool like PRES could decide when events like interrupts are delivered.

Monday, November 9, 2009

Loop/Task/Graph

Loop Parallelism:

I haven't specifically refactored from sequential to parallel, but the approach in the pattern seems very logical.  If you have a stable build, you can do unit testing by comparing the output of the sequential version to that of the current stage of evolution, and you can even dynamically generate test cases, since the working (sequential) version serves as an oracle.
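Here's a minimal sketch of that kind of check, with the sequential version kept around as the test oracle (the method names and the use of a Java parallel stream are my own illustration, not anything prescribed by the pattern):

```java
import java.util.stream.IntStream;

public class LoopRefactorCheck {
    // The original sequential version stays around as the test oracle.
    static long sumOfSquaresSequential(int[] data) {
        long total = 0;
        for (int x : data) {
            total += (long) x * x;
        }
        return total;
    }

    // The refactored version under test (here just a parallel stream).
    static long sumOfSquaresParallel(int[] data) {
        return IntStream.of(data).parallel().mapToLong(x -> (long) x * x).sum();
    }

    public static void main(String[] args) {
        // Dynamically generated test case: random input, oracle = sequential result.
        int[] data = new java.util.Random(42).ints(1_000_000, -1000, 1000).toArray();
        long expected = sumOfSquaresSequential(data);
        long actual = sumOfSquaresParallel(data);
        if (expected != actual) {
            throw new AssertionError("parallel version diverged: " + actual + " vs " + expected);
        }
        System.out.println("sequential and parallel versions agree: " + expected);
    }
}
```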

I have used some tools in the .NET environment to profile the execution for performance bottlenecks, though my course of action in these cases was sequential optimization rather than parallelization.  I could see that some applications might not require such tools to isolate the best portion to parallelize, but I would think that the profiler is more "fact based", in that what you think is the bottleneck might not truly be one, and that you would likely make better decisions when assisted by tools.

I could definitely see that a bunch of different transformations/evolution iterations would be needed to see performance improvement.  Likely, early transformations would be adding overhead to support the first items of parallelization, which would be amortized over more parallel execution down the road.

Task Queue:

I think that a task queue could be an implementation level detail of a fork/join or divide-and-conquer algorithm.  The fork/join, for example, deals with how to decompose a problem into smaller subproblems that can be executed in parallel, and a task queue would be one way of executing these tasks once defined.
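As a rough illustration of what I mean, here is a bare-bones task queue in Java that could sit underneath a fork/join or divide-and-conquer decomposition--all of the class and method names are hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimpleTaskQueue {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public SimpleTaskQueue(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().run();   // block until a task is available, then execute it
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // shut down when interrupted
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public void submit(Runnable task) {
        queue.add(task);
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleTaskQueue tq = new SimpleTaskQueue(4);
        // Tasks produced by some decomposition step (fork/join, divide-and-conquer, etc.)
        for (int i = 0; i < 8; i++) {
            final int id = i;
            tq.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        Thread.sleep(500);  // give the daemon workers a moment to drain the queue
    }
}
```

The decomposition pattern decides *what* the tasks are; the queue is just one way of deciding *who* runs them.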

As far as the existence of source code, this seems like one of the patterns that didn't need it quite so much--I think everyone gets the idea of a queue, so probably just the variations from the standard conception would have been sufficient.

Graph Partitioning:

I'll have to admit, quite a lot of this pattern was beyond my realm of expertise.  It would seem to me that you would have to have quite a complex, dynamic (from execution to execution), and intensive problem to solve to warrant the overhead of this pattern.  Additionally, given the complexity of implementing it, you'd better *really* need that performance boost to risk the bugs that might creep into the program due to this complexity.

Tuesday, November 3, 2009

Task Parallelism/Recursive Splitting/Discrete Event

Task parallelism seems like a pretty straightforward pattern from its title.  Its content, however, is not quite what I expected.  In many regards it reads like a summary of other patterns, and its actual substance is more an architecture-level performance tuning guide than a way of structuring a solution.  Most of my experience has been with the .NET framework on Windows, and I've often used the thread pool to achieve a task-parallelism style of implementation.  The software I have worked on has generally used large-grained divisions of work, so I haven't run into situations where better hardware support for task queues would specifically be necessary.  However, as increases in computing speed come from parallelism rather than the clock, and algorithms need to follow suit to provide the expected speedup, I can see this type of hardware becoming necessary to leverage an ever-more-parallel yet still diverse install base.

In recursive splitting, the granularity of execution is controlled by the size of the recursion's base case.  In most parallel algorithms, there is a need for balance between wall-clock running time and the total number of operations executed, as may be important in multi-user systems/servers, or when there are tight resource constraints.  Finding the balance between these factors is quite hard to do at the abstract level--different hardware with different parallel facilities will execute the same algorithm in very different ways, so the right balance likely depends on the hardware, the load, and the responsiveness and resource-limitation requirements of the system.  Since recursion and iteration are essentially isomorphic, any algorithm that iterates over some data or performs a series of steps could be considered recursive, though not very naturally in some cases.  Hence, this pattern, taken indiscriminately, can be applied to a huge class of problems.
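To make the granularity knob concrete, here is a tiny hypothetical Java sketch that just counts how many leaf tasks a given base-case cutoff produces when a range is split in half until pieces fall below the cutoff--smaller base cases mean more potential parallelism, but also more scheduling overhead:

```java
public class Granularity {
    // Number of leaf tasks produced when a range of size n is split in half
    // until pieces are no larger than the base-case cutoff.
    static int leafTasks(int n, int cutoff) {
        if (n <= cutoff) return 1;          // base case: one sequential task
        int half = n / 2;
        return leafTasks(half, cutoff) + leafTasks(n - half, cutoff);
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        for (int cutoff : new int[]{100_000, 10_000, 1_000}) {
            // Smaller base cases mean more parallelism, but more scheduling overhead per unit of work.
            System.out.println("cutoff " + cutoff + " -> " + leafTasks(n, cutoff) + " leaf tasks");
        }
    }
}
```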

I haven't worked on a lot of "pure" parallel systems, so I would have to say that message passing, as viewed from an SOA perspective, is the environment I'm more familiar with.  I feel that messages are a pretty natural way to separate parallel executions, because they unclutter the interactions.  An example of a system where the ordering of messages matters might be a banking application--overdraft fees are charged whenever a customer withdraws (through check, ATM, etc.) more than their account balance.  If messages were processed out of order, overdraft conditions would not be detected in a consistent or correct order, and fees might be assessed when they shouldn't be.  With regard to deadlock detection, I think the domain in which the algorithm is being implemented determines whether timeouts or deadlock detection is the better method.  If all executions are tightly bounded, with low standard deviation in running times, then timeouts would be quite appropriate, as the system could be quite certain within a reasonable amount of time that deadlock has occurred.  In other situations, where running times may vary greatly and/or it's very costly to roll back a transaction unnecessarily, the cost of deadlock detection may be well worth it.
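One way to cope with that ordering requirement, assuming messages carry sequence numbers, is to buffer out-of-order arrivals and apply them strictly in sequence (a sketch of my own, with made-up names, not anything from the reading):

```java
import java.util.PriorityQueue;

public class OrderedAccountApplier {
    private long nextSeq = 1;                 // next sequence number we expect to apply
    private long balance = 0;
    private final PriorityQueue<long[]> pending =
            new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));  // entries are [seq, amount]

    // Messages may arrive out of order; apply them strictly in sequence order.
    // Assumes a single consumer thread is draining the message stream.
    public void onMessage(long seq, long amount) {
        pending.add(new long[]{seq, amount});
        while (!pending.isEmpty() && pending.peek()[0] == nextSeq) {
            long[] msg = pending.poll();
            balance += msg[1];
            if (balance < 0) {
                System.out.println("overdraft after message " + msg[0]);  // fee would be assessed here
            }
            nextSeq++;
        }
    }
}
```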

Armstrong Ch5

AND supervisors make sense when a bunch of processes are collaborating and building a collective state towards some goal.  In these cases, the supervisor couldn't restart just one of these processes, because it would introduce inconsistent state, and hence all children must be restarted if any need to be.

OR supervisors make sense when each process being supervised is essentially independent, or at least where, when processes communicate, they don't rely on any monotonic progression of state.

Erlang's method for restartability does not necessarily make restarts cheap, but it certainly provides a good framework through which the programmer can attempt to make them so.  Providing the AND/OR hierarchical granularity certainly helps isolate the scope of restarts, but there could be situations where a cascade of restarts occurs, each at a larger granularity, until a whole subsystem is restarted; in a different model, the programmer might have known up front that an entire subsystem restart would be necessary, and therefore bypassed the cascade of restarts altogether.

Using the process/supervisor model keeps the programmer aware of the unit of restart that their code may be subjected to, and therefore it allows for the system to be implemented in such a way that restarts can be enacted at multiple levels--more naive approaches wouldn't necessarily have the clarity of separation to support this.  Because of this, faults can be isolated to a certain process and its supervisor, and therefore not unwind the stack to some arbitrary point, and hence fault tolerance is greatly enhanced.

A rejuvenation model for fault recovery could certainly reduce some of the overhead associated with restarting a big AND group, but it's possible that any benefit to be gained by this would be nullified by the additional complexity involved in supporting the rejuvenation process.  Also, I could imagine that rejuvenation isn't as robust as a restart, and therefore additional errors may be introduced by trying to support such a model.

Erlang prescribes an adherence to specification such that if any deviation is detected, an exception should be thrown.  This is contrary to the all too common reality in which a programmer guesses and/or imposes their own personal beliefs about how the system should operate, complicating the matter.

Providing a good mental framework, even if not a concrete software framework, for managing the supervisor/task relationship goes a long way to influencing the way a piece of software is built.  As such, even though the supervisor logic doesn't come "out of the box", adopting this paradigm still provides a solid separation of concerns which is well suited to the task at hand, and therefore is superior to more traditional organizations.

I don't see that Erlang's model for restarts is that much different from the crash-only model, except for the granularity involved--whereas the crash-only model is defined at the "component" level, Erlang's model decomposes this into a granular hierarchy, each piece of which does essentially the same thing.  The Erlang model benefits and suffers from the phenomenon I described earlier, where a cascade of restarts may be undertaken, ultimately resulting in whole-subsystem restart.  In this case, the crash-only system would have been faster, but in most cases, there is some distinct scope beyond which an error doesn't persist, and thus the Erlang model provides a better way of handling things.

Wednesday, October 21, 2009

Armstrong Ch2

Joe Armstrong's definition of the descriptions a software architecture should be composed of is quite good at characterizing a system.  One thing that I think should be moved out of the problem domain section into its own section is performance constraints and requirements.  In this case, the problem domain clearly states that a telecom system should exhibit certain behavior.  But in other cases, the problem domain might not be so explicit--a system designed to revolutionize an industry by bringing previously unheard-of speed to solving problems might not characterize the performance requirements in the problem domain; these would be more an attribute of the system we are trying to build to suit the problem domain, rather than an intrinsic property of it.  Required performance guidelines are certainly central to how a system should be architected.

Messaging for parallelism makes a lot of sense.  It helps to reduce the effect of unnecessary assumptions that shared-memory systems can easily impose.  I have worked with a number of web-services-oriented applications, which essentially use messaging, and it certainly does enforce strong separation.  However, in most of the systems I have worked on with web services, the calls have been blocking in nature, so no parallelism gains were realized.

"Fail fast" kind of scares me.  The idea that an entire process should terminate its execution upon reaching an error case leads me to believe that performance could be detrimentally affected.  For example, in the case that a process uses a great deal of cached data, assumedly this cache would be flushed when the process "fast fails", and therefore there could be quite a bit of a performance hit by reloading this cache each time the process restarts.

I think that concurrency oriented programming makes sense in certain problem domains.  In others, I could see this style as an impediment from getting the job done.  I suppose that as the supporting tools for this type of programming evolve, the barrier to entry of this programming style will be reduced, but it seems to me that unless concurrency is priority one in the system, adopting this model would potentially make more work for you.  Having said this, more and more systems are setting concurrency as priority one (especially in the server software world), so I am by no means discrediting this style.  Rather, I am proposing that it should be adopted judiciously--only when concurrency is of high enough priority to warrant the overhead.

An unreliable messaging system makes a ton of sense to me.  Just look at how the internet has evolved, and worked well (arguably, I guess) for a wide variety of applications.  To impose a reliable messaging system would be to burden certain types of communication with overhead that they don't require.  Furthermore, as reliable messaging can be built on top of unreliable messaging with proven techniques, I believe that this is the best choice for a messaging system.

Monday, October 19, 2009

Map Reduce

The Map Reduce pattern is quite similar to the Fork/Join pattern.  However, whereas the fork/join pattern recursively subdivides one problem into smaller ones and combines the results as the stack unwinds, the Map Reduce pattern operates over a large list of independent problems, providing a mechanism to gather results.  The primary difference, then, is whether the units of work to be completed are decompositions of a large monolithic problem, or samples in a search space--independent of one another, but culminating in a solution.

Errors should be handled in a configurable way, allowing the application developer to specify behavior on a case by case basis.  On some problems, failure of any subproblem may prevent the problem from being solved, whereas in, say, a monte carlo simulation, the failure of one task may be essentially inconsequential, presuming that the failure isn't related to a flaw in the underlying model.  As such, the application developer would like to either ignore the error, schedule the task for re-execution (in case the error was transient), or terminate the computation.  We certainly wouldn't want the framework making a decision to terminate the computation, potentially losing all existing computation results, due to a small anomaly in one task.  Hence, a configurable error handling system would be the only way to make a framework general-purpose enough.
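A small sketch of what such configurable error handling might look like, with a hypothetical FailurePolicy that the application developer chooses per job (my own illustration, not any particular framework's API):

```java
import java.util.concurrent.Callable;

public class MapTaskRunner {
    // Policies the application developer can choose from, per job or per task type.
    enum FailurePolicy { IGNORE, RETRY, ABORT }

    static <T> T runTask(Callable<T> task, FailurePolicy policy, int maxRetries) throws Exception {
        int attempts = 0;
        while (true) {
            try {
                return task.call();
            } catch (Exception e) {
                attempts++;
                switch (policy) {
                    case IGNORE:
                        return null;                        // drop the sample (e.g. one Monte Carlo trial)
                    case RETRY:
                        if (attempts <= maxRetries) break;  // transient error: loop and try again
                        throw e;                            // retries exhausted: escalate
                    case ABORT:
                        throw e;                            // any failure terminates the computation
                }
            }
        }
    }
}
```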

I haven't ever used this pattern, and I actually had to do a little bit of Wikipedia-ing in order to get a better idea of real applications.

This pattern excels when the majority of the work to be performed is in the "map" function, to be executed independently.  When there are a great deal of interdependencies in the "reduce" function, the pattern won't scale well, as reductions may need to be deferred until other mappings have been performed.  This case may be outside of the scope of the pattern, but if so, it can be added to the list of weaknesses of the pattern.

Event Based, Implicit Invocation Pattern

The "Event Based, Implicit Invocation" (EBII) Pattern is one that is so incredibly common, it's almost redundant to document it as a pattern, but nonetheless, for completeness, it can prove useful to understand the differences and constraints in implementing it.

The key difference between this pattern and the Observer pattern is the cardinality and loose coupling between the sender and receiver.  Whereas in the Observer pattern the "publisher" knows its "subscribers" and issues notifications, in the EBII pattern, the publisher knows nothing about its "subscribers", but rather just knows its own notification mechanisms.  These mechanisms interface directly only with a manager component, which is responsible for distributing these notifications.  Additionally, whereas the Observer pattern states a "one-to-many" cardinality, the EBII pattern says that any object can publish events and any object can subscribe to events.
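A bare-bones sketch of such a manager component, written in Java purely for illustration (the pattern itself doesn't prescribe this code)--publishers only ever talk to the manager, which is the only party that knows the subscribers:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class EventManager {
    // The manager is the only component that knows who is listening to what.
    private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String eventType, Consumer<Object> handler) {
        subscribers.computeIfAbsent(eventType, k -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Publishers call this without knowing anything about the receivers.
    public void publish(String eventType, Object payload) {
        for (Consumer<Object> handler : subscribers.getOrDefault(eventType, List.of())) {
            handler.accept(payload);   // a nonblocking variant would hand this off to an executor
        }
    }

    public static void main(String[] args) {
        EventManager bus = new EventManager();
        bus.subscribe("feed.updated", item -> System.out.println("got: " + item));
        bus.publish("feed.updated", "new RSS entry");
    }
}
```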

When it comes to dispatch methods, implicit invocation provides greater decoupling than explicit invocation.  By using explicit invocation, the pattern is considerably closer to the Observer pattern.  An additional dimension of decoupling comes from using nonblocking calls.  If blocking calls were to be used, the notifying object's timing would be affected by the receiving object's handler execution to a considerably greater degree than in the nonblocking case.  Obviously, the event handler will still utilize system resources, but this won't as fundamentally affect the timing semantics inherent in the program.

Applications of this pattern are so prolific (as previously mentioned), and the author names such wide classes of programs that use it (such as all clients registering over a network to receive some sort of notification), that it's hard to think of an example that doesn't fit under the umbrella provided.  RSS feeds would fall under this categorization as one utilization of this pattern.

As is explained in this pattern, the manager component has everything to do with how the event notification system scales.  As such, it would be inappropriate to implement a single notification broker for all types of events.  Imagine the same system being responsible for handling I/O interrupts as well as RSS notifications--the performance and complexity requirements are so divergent that one system could not be expected to span this expanse.

Error handling in an event based system is the responsibility of the event receiver.  If an event sender *really* needs to know about a failure in another component, it should register a listener for the receiver's error event.  This introduces a bit of a bidirectional constraint, but still maintains a great deal of decoupling.

Sunday, October 18, 2009

Chess

I haven't had a great deal of experience with testing parallel programs.  As I believe I've stated in my prior blogs, most of the applications that I've built have used totally isolated processing threads, accessing common resources only through a database server--where all but a general awareness of concurrency is abstracted away.  In the couple of instances where I have tested the concurrent nature of these programs, I've generally only used stress testing.

While Moore's Law may be one of the most widely known conjectures related to computing, I'd argue that it is still trumped by Murphy's Law: "Anything that can go wrong, will go wrong".  When we write test suites for an application, we're hoping that the "will go wrong" will rear its ugly head in the very finite amount of time allotted.  Even stress tests running on large farms of machines pale in comparison to the diversity of executions that will happen "in the wild", across an expansive space of hardware and software platforms, user interactions, environmental factors, and configurations, and possibly across years or even decades of deployment.  Hence, even a stress test that has run for weeks without failing in a test environment cannot purport to truly capture all behaviors.  Additionally, since any code change, either in the application or in the platform, can totally invalidate a stress test, even the most exhaustive practical stress test is only good for as long as the identical bits are in play--and we all know how quickly requirements change and bugs are found.

The small scope hypothesis is quite an interesting proposition.  Being able to bound the complexity of test cases is certainly a desirable trait from the perspective of a real-world tool such as CHESS.  I don't know that I can offer any specific critical analysis on this subject, but I would argue that regardless of any formal proof of such a concept, code is written by people, and to the extent that they are unrestricted in the concurrency structure of the code they write, they will exercise that freedom (again with the Murphy's Law!).  Hence, it's my belief that only through empirical study will we find out what percentage of bugs meets the criteria of this hypothesis with given parameters.

I could envision a monitor on top of CHESS that would profile the efficiency of various interleavings of execution, such that if certain interleavings are found to be particularly efficient, design work can be undertaken to try to guide the execution towards such interleavings (such as monitoring how locking patterns occur, the duration of thread sleeps, and the relative priority of threads).

The risk of the semantics of synchronization primitives being misunderstood is minimal to the extent that the conservative option is chosen when building the happens-before graph.

Saturday, October 17, 2009

OPL Iterative Refinement

I don't believe that I have ever used this pattern before.  This is probably due to my professional experience being mainly confined to data management type applications, but perhaps because of this I'm an ideal candidate to critique the understandability of this pattern.

I understand what this pattern proposes, but what I am less clear on is when I would use it.  The pattern talks in abstract terms about the applicability, but this is perhaps a pattern that would benefit from a more concrete example.  Certainly, if I understood my problem domain to look like one of the examples, it would be obvious that this pattern could apply, but my concern is that in applicable cases that are described in terms different than those of the pattern, it might not be obvious that this pattern could be of use.

OPL Layered Systems

Similar to the OPL Pipes & Filters pattern, the OPL Layered Systems pattern is considerably simpler and quicker to grasp than the previously presented pattern.  The previous pattern pointed out (and the OPL pattern omitted) that designing a good error handling system can be difficult in layered systems.  The OPL pattern made more explicit notice of the performance implications, and was more prescriptive in defining how the number of layers should be managed, whereas the previous presentation simply stated that "crossing component boundaries may impede performance".  I'm not sure if it's a strength or weakness of the OPL pattern, but the previous presentation details how to go about deriving a layered architecture, whereas the OPL pattern does not.  I guess this would be an argument for what the contents of a pattern should be.  It's been my feeling, however, that a pattern should read like a short encyclopedia entry--giving enough information to understand the basics, and deferring specific non-core details to other texts.  The OPL pattern does this, whereas the previous presentation goes into considerably more depth.  This may be due to the difference of one being an online library of patterns and the other being a chapter in a book, but to describe the best way to get a grasp of a large number of patterns, the former is more expeditious.

OPL Pipes & Filters

The OPL version of the Pipes & Filters pattern is definitely simpler and easier to understand than the previous description.  Part of this is due to the fewer detailed examples.  The first presentation of this pattern uses a rather complex example of building a programming language, which in my opinion clutters the essence of the pattern.  The few short examples in OPL, presented *after* we understand the pattern to a large degree, provide enough detail to grasp the purpose and types of applications for this pattern, and I believe that this is the extent of what a pattern should be--it's not supposed to be an exhaustive reference for the entire body of knowledge relating to the subject, but rather concise enough to be "thumbed through" when searching for a pattern to fit a design need.

The OPL pattern ignores the detail of "push" vs. "pull" pipelines, which in my opinion is bordering on being too implementation specific to be included in the high level pattern.  It excels, however, at describing how the granularity of pipe data should be managed to exploit optimal buffering and potentially, concurrency.

Friday, October 16, 2009

Classics

Smalltalk's treatment of all constructs as instances of a very few primitives makes it a fundamentally very simple language.  This is not to say that it's necessarily easy to express an idea in the language, but rather that the language itself is compact.  Using these few constructs alone allows one to reason with a common fundamental model, rather than having to worry about the semantics of a plethora of language constructs.  Messages, specifically, make it easier to perform code analysis on the interactions of code blocks, and therefore make it easier to express algorithms in a way that is more conducive to parallelization.

Programming and design using inheritance is generally a mixed blessing.  To the extent that the system grows in the anticipated manner, inheritance works great to encapsulate functionality and reuse code.  It's when one or more of the fundamental assumptions of the inheritance model are challenged that the headaches begin.  At that point, you either have to rework your entire inheritance hierarchy to be consistent with the updated model, and incur the risk of breaking something, or hack in an ill fitting subclass which breaks the fundamental reasoning about the system, but not the implementation--either way, this breaks something.  I think in very well understood domains, inheritance works well, but in arenas with rapidly changing requirements and/or models which are not very well understood, inheritance leaves something to be desired.

Regarding dynamic dispatch, this is essentially a similar method to that used in C++ inheritance, using a virtual method table.  It's just another level of indirection in memory, and we've seen time and time again levels of indirection added to software to solve problems of complexity, so I don't feel that this one additional layer would do much to change the overall scalability of the software.

The open nature of Smalltalk classes violates the principle that a system should be "open for extension, but closed for modification".  While it's useful to ensure that no arbitrary constraints are put in place, it could certainly pose problems for maintenance.  Whereas in other languages you might declare a class as "final" or "sealed", such that you would be free to modify its structure whilst maintaining the external interface, leaving classes open to modification makes these types of changes more likely to break other code.

While the chapter argues that compile-time checking can garner a false sense of security, it has been my experience that when dealing with applications heavy in definition/data, and dealing less with complex algorithms, static typing and design/compile time tools/checking can go a long way to making an application sound.  While it doesn't ensure that the program executes without error and generates the correct output, it at least ensures that the types of operations that are occurring are semantically valid, and by building a smart type structure in the program, this can provide a great number of advantages.  Metaprogramming requires good code coverage from a test suite before it can touch the kind of whole-program checking that statically typed languages can offer.

Thursday, October 15, 2009

Reentrancer

Reentrancy is important in order to allow for safe concurrent execution.  Being reentrant, we can know that an execution of a program will be totally independent of any other execution that may be running, and therefore assures us (to whatever degree possible) that we can scale this application, such as for handling concurrent requests of different users.  We can do this without performing additional analysis to verify that the application will function as expected.

I believe that this refactoring is sufficient to create reentrancy.  To see why, it's useful to think of how different threads of execution might come to "know" or influence one another.  When a thread/execution starts from a blank slate, it has to obtain access to resources in order to do its work.  It can either create objects of its own or access shared state, represented in the OO world as static constructs.  Objects created by one thread cannot be seen by other threads unless a reference is somehow communicated, and this communication can essentially only happen through static constructs--hence the distinction between mutable and immutable statics.  Immutable static constructs, once created, are essentially harmless, as they are read-only.  Mutable static constructs, however, open the door for this type of troubling communication.  By moving these constructs to thread-local storage, each thread thinks it is accessing an application-wide static construct, when in reality it is accessing only its own copy; in this way, threads are almost "tricked" into being reentrant.
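The core of the transformation looks something like the following (my own toy example, not Reentrancer's actual output)--a mutable static replaced with a thread-local copy:

```java
public class RequestContext {
    // Before the refactoring: a mutable static shared by every thread.
    //   private static StringBuilder log = new StringBuilder();
    //
    // After the refactoring: each thread sees what looks like a static field,
    // but actually gets its own private copy.
    private static final ThreadLocal<StringBuilder> log =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void append(String message) {
        log.get().append(message).append('\n');   // no other thread can observe this buffer
    }

    public static String dump() {
        return log.get().toString();
    }
}
```

Each thread that calls append sees only its own buffer, which is exactly the "trick" described above.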

Unfortunately the real world is a bit messier than this.  When it comes to library and system calls, external state can find its way in.  Whether it's an externally managed singleton-type resource or some externally shared medium, problems can arise.  Though it makes the tool more cumbersome to use, I think that warnings about library functions are about the best that can be expected at this point.  The semantics at the source code level are pretty straightforward, but when a library call may delegate responsibility to some platform-dependent implementation of a function, a tool like this can't be expected to analyze all possible cases.

I think it's pretty clear that reentrant programs are thread safe--the programmer has gone through a great deal of effort to effectively sandbox each execution so that it can't substantively interact with any other execution.  Thread safety alone, however, generally implies that resources are being shared and that attention is being paid to how these interactions occur, but the bottom line is that they still occur, hence reentrancy is not guaranteed.

Wednesday, October 14, 2009

ReLooper

I've never touched Fortran, but from what I understand from a little Googling, the loop parallelization in Fortran was probably largely related to declarative data parallelization.  Modern languages such as Java have considerably more complex semantics in common usage.  Apparently in Fortran independence between these operations was considerably more common, whereas in OO, sharing of objects makes this harder.
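For illustration, the shape of transformation ReLooper targets is roughly the following; ReLooper itself generates ParallelArray-based code, but I'm sketching the idea with Java streams and made-up method names:

```java
import java.util.stream.IntStream;

public class LoopToParallel {
    // Sequential original: each iteration is independent of the others.
    static void transformSequential(double[] src, double[] dst) {
        for (int i = 0; i < src.length; i++) {
            dst[i] = expensiveKernel(src[i]);
        }
    }

    // Parallelized form: legal only because iterations share no mutable state.
    static void transformParallel(double[] src, double[] dst) {
        IntStream.range(0, src.length).parallel()
                 .forEach(i -> dst[i] = expensiveKernel(src[i]));
    }

    static double expensiveKernel(double x) {
        return Math.sin(x) * Math.cos(x);   // stand-in for real per-element work
    }
}
```

The hard part, and the point of the tool, is proving that the iterations really don't share objects.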

In terms of usefulness factors, I found it quite complete from an abstract perspective.  If I were to use a tool like this on a commercial project, I would want a way to know what kind of benefit I might expect to see at a whole-program level.  Even if it were a considerably simplified algorithm, to at least know that there are X potential refactoring opportunities would be beneficial.

For safeness of concurrent execution, while I didn't understand some of the notation in the analysis, I can think of a couple of ways in which it would be hard or impossible to know about safeness.  Any place involving dynamic binding would cause an issue with static analysis (hence the name), and I could see that some applications that make heavy use of such methods would have essentially no use for such a tool.  Additionally, I'm not sure if it's possible in Java, but in .NET there is an interop layer with unmanaged code, where you would have no facility available for code analysis.

Tuesday, October 13, 2009

Functional OO

Let me preface this by saying that my only foray into functional programming has been through a languages & compilers course as an undergrad, so my experiences are limited.

I feel that this chapter was quite heavily biased towards OO techniques, and though that certainly colored the conclusions drawn, I find them neither inaccurate nor particularly unfair.  The problem domain clearly emphasized structure over compact expressiveness, so within this problem domain, OO has a clear advantage.  Functional languages excel at expressing a fixed set of computations in a certain way, as well as at extension through substitutability of chunks of functionality.  Functional languages benefit from a reduced set of ways to express a given piece of computation.  Whereas in OO there are structural decisions that affect how a system will be composed, there are fewer such decisions in functional languages, leading to more unified conventions.

The aforementioned structural decisions in OO are, however, one of its greatest advantages.  By encoding a great deal of the problem domain knowledge into the structure, the programmer can plan for future expansion and enforce a much stronger separation of concerns.  In my opinion, OO is a better technique for any problem domain dealing with data, behavior, and variations of these.

While functional languages gain the benefit of mathematical models which automatically facilitate a certain amount of architectural reasoning, OO benefits from the experience of the rest of the world, in that it can be used to more closely model real world things and scenarios.

Tuesday, October 6, 2009

Concurrencer

When deciding whether to parallelize an existing sequential application or to re-architect for parallelism, I think it is of great concern to consider whether the underlying algorithms and structures are sufficiently decoupled (for algorithms, whether the iterations are decoupled), such that parallelization would be practical and useful.  In applications using algorithms where each iteration explicitly depends on the previous one, and where there is little or no "fan out", the underlying algorithms should be inspected to see if the solution can be formulated in a different manner, and parallelization through refactoring should be decided against.  If, on the other hand, the structures and algorithms exhibit good decoupling, parallelization through refactoring may bring great results, and should be further investigated and possibly undertaken.

Parallel libraries are of great advantage to the programmer for several reasons.  First, they abstract a great deal of the complexity of concurrent programming away from the developer, and therefore let the developer work with a simplified/abstracted model.  This leads to a decrease in the subtle timing issues and hard-to-test bugs that arise from the developer's lack of complete knowledge.  They additionally provide the benefit of disseminating knowledge about parallel patterns by encapsulating the relevant functionality.  This provides a library of paradigms which become known to the developer, and therefore gives the developer a variety of new perspectives from which to model their algorithms.
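A tiny example of that encapsulation, in the spirit of the int-to-AtomicInteger refactoring the paper describes (my own illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HitCounter {
    // Hand-rolled version: correct, but every increment takes a lock.
    private int hits;
    public synchronized void recordLocked() { hits++; }

    // Library-based version: AtomicInteger hides the tricky part
    // (a hardware compare-and-swap retry loop) behind one method call.
    private final AtomicInteger hitsAtomic = new AtomicInteger();
    public void recordAtomic() { hitsAtomic.incrementAndGet(); }
}
```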

When it comes to semi-automatic vs. fully automatic refactoring, I believe that there is an appropriate place for each.  To the extent that exact semantics can be ensured, a fully automatic approach would be preferred, as it keeps the codebase simpler and more to the point.  In the event, however, that the semantics of the application would be changed, no matter how slightly, it would be best to place this control in the developer's hands, as ultimately, they must decide what the application must do, and stand responsible for its operation.

Cluttering of code due to parallel refactorings is certainly an issue when it comes to maintainability.  I believe that as languages become more expressive, and essentially more functional, code will be more conducive to fully automatic refactorings that can happen in the compilation process, as opposed to at the source code level.

Not so much a refactoring, per se, but in order to achieve parallelism in a number of my applications, I use a SQL Server database, which has a great deal of internal parallelization built in, and then I try to formulate my solution in terms of operations across this database.  In this manner, I can gain a great deal of parallelization over a purely sequential program.

Another factor I would have liked addressed is an analysis of how these parallelizations affect total system scalability when faced with a large number of copies of the same algorithm running--i.e., if different users were running instances of the algorithms on a shared system.  Perhaps this could be accomplished by providing a single-core benchmark, to show the overhead of the parallel refactorings.

What a Bazaar Cathedral that is!

From my experience, newer technologies make building software simpler, quicker, less error prone, and more feature rich.  Therefore, when a new subsystem or fundamental paradigm becomes available, the developer must take a good look at what that change will do to their software product.  In many cases, the maintainability and performance gains of migration are reward enough for the effort, but even when they are not, the developer needs to look towards future requirements and the difference in how they would be supported under the old and new subsystems.  As a general rule, it's my philosophy to upgrade to new frameworks (in this case, Qt3 to Qt4) as soon as they are stable, and at times even to delay the implementation of new features while awaiting the release of such a framework.

As I already mentioned, new frameworks often make things easier to do, more stable once they're built, and easier to understand and maintain.  If a team were to ignore such innovations and plow headlong into features without considering the new framework, they'd find themselves with an obsolete codebase sooner than they think, as the updated framework makes much of the work they have done redundant.  I furthermore embrace the update of such frameworks as an opportunity to challenge existing assumptions, perform a rather detailed review of the system as it stands, and architecturally plan for future needs--essentially as an opportunity for a large scale refactoring.  Even in "cathedral" style projects, requirements change, technology changes, people change, and updated frameworks are a great driver to readdress the architectural and quality concerns of the system.

I feel that the bazaar type of project falls in the middle of the spectrum in terms of efficiency and output quality, between well managed (and understood) cathedral projects and poorly managed cathedral projects.  The bazaar certainly has the advantage over the poorly managed cathedral, as the workers can stop working on a portion of the software that they are certain is doomed to fail, for whatever reason, and can redirect effort elsewhere.  This freedom, however, would be detrimental in a well managed cathedral project, where perhaps it's hard to communicate the full scope and applicability of a portion of the software to the developers.  A certain portion of the application may fill great needs for HR and accounting personnel, but if the programmers aren't interested in these fields, and therefore don't see the need, this portion of the user base would be alienated.  In this manner, a (well managed) cathedral project manager could allocate resources better than an open project, whose contributors simply work on what is interesting.

Overall, my feeling is that a bazaar structure works very well when the user base is closely aligned with the developer base, and when the total number of hours worked is not of prime concern.  (From here on, "cathedral style" means a well managed cathedral project.)  I feel that the cathedral style is much more effective when there are silent user bases, or user bases that are insufficiently skilled to foresee their uses of the software, to the extent that they fail to participate in its design.  The cathedral style also benefits when there is concern for productivity per developer hour spent (such as in a company desiring to minimize the cost of constructing the software).

The real strength of the bazaar is that anyone can contribute.  This benefits the project when people with unique skills, perspectives, and goals join a project and are then able to contribute in a way that the existing developer base would have been unable to.

Wednesday, September 30, 2009

Fork/Join

Multi-core processors are here to stay.  While not really general purpose, I was amazed by the 216 cores (stream processors, in this case) per card on each of the 2 new NVIDIA GTX 260 cards I just got...this just goes to show that the ability to decompose work into parallel algorithms will separate the men from the boys in the next few years.  With dual or even quad core machines, a programmer could generally get away with a single-threaded algorithm for all but very large computations, but as we're driven towards massively increasing numbers of processing cores, the experience users will demand won't be feasible in a single-threaded manner.

There is always a chasm between ease of implementation and maximum performance with such technologies, but I believe that user friendly general purpose frameworks with pluggable/configurable performance enhancements will make it easy to start with a simple implementation, and then as requirements demand, move towards performance tweaks at the framework configuration level.

One type of system that I can think of which does something like this is a SQL database platform.  The tools/language present an interface conducive to parallelizing a workload through the use of non-prescriptive, declarative SQL statements.  Then, later on, users can optimize performance by tweaking indexes as well as providing execution hints.  While it is true that a totally custom solution for the same problem could achieve better performance than the best possible implementation in a SQL database, the performance is usually close enough to be acceptable, given the reduced programmer effort.  One drawback of ANSI SQL is the inability to explicitly invoke recursive calculations/logic, but Microsoft has started to address this, albeit only in very specific and simplistic situations, through the use of Common Table Expressions for recursive JOINs.

Moving back to the Java Fork/Join framework, I can see that this would be useful when the algorithm can be decomposed with the exact semantics supported.  I don't have much experience with parallel algorithms, so I don't really know what percentage of algorithms are efficiently expressible in this manner, but this would be a major factor in the widespread adoption of such a framework.  Even if an algorithm is expressible in this form, if it is considerably less intuitive and therefore harder to maintain, that could be another barrier to adoption of such a framework.
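For reference, here is a minimal divide-and-conquer sum written against the fork/join API (I'm using the java.util.concurrent form that was standardized later, in Java 7; the framework described in the paper has essentially the same shape):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int CUTOFF = 10_000;   // the granularity knob from recursive splitting
    private final int[] data;
    private final int lo, hi;

    ForkJoinSum(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= CUTOFF) {              // small enough: just do it sequentially
            long total = 0;
            for (int i = lo; i < hi; i++) total += data[i];
            return total;
        }
        int mid = (lo + hi) >>> 1;
        ForkJoinSum left = new ForkJoinSum(data, lo, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, hi);
        left.fork();                          // schedule the left half for (possibly stolen) execution
        long rightResult = right.compute();   // work on the right half in this thread
        return left.join() + rightResult;     // wait for the forked half and combine
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);
        long sum = new ForkJoinPool().invoke(new ForkJoinSum(data, 0, data.length));
        System.out.println(sum);
    }
}
```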

I would cite the widespread popularity of high level, GC enabled languages as reason enough to not concern oneself too much with the language level performance constraints.  Inevitably, if such languages stay popular, the overheads incurred will be worth the benefit derived, else the community would either optimize the language (or VM), or move to another solution.  I don't think that this particular framework is so different from any other performance consideration to warrant special concern.

The work scheduling/stealing algorithm could lead to suboptimal performance if a stolen task requires a great deal of data to do its work but doesn't require much computational time, specifically in the case of a Non-Uniform Memory Access (NUMA) system.  In this scenario, a disproportionate penalty is paid to move the data from one thread/processor's context to another, without the desired boost in parallel execution performance.

Tuesday, September 29, 2009

Emacs/Featurism

Emacs is one of those systems where I've never quite understood the appeal.  I'm sure this is because I've never used it for more than open/simple edit/save, but still, it seems like a relic from an age long since past.  I think that its success can be attributed to (in addition to the architecture) the highly technically competent user base, along with the lack of other tools to get similar jobs done.  This created a bit of a power vacuum, and acted like a lens, focusing the technical expertise of its age upon itself.  If it were not open source, I don't think that a system like this could evolve.  Firstly, it would be rare to find an organization with enough technical expertise across a large enough base of users to get beneficial functionality built.  Secondly, as we have learned, software tends to model the organization that built it, so an organization large enough to have the necessary complement of technical expertise would also likely have a great deal of bureaucracy, leading to software with a great deal of bureaucracy, and therefore considerably more difficult interface contracts.  Additionally, smart (in the traditional sense) organizations try to reduce duplication of effort, so they would likely not encourage experimentation to the degree necessary, for fear that they would be paying two people to do the same thing twice.

The architecture of Emacs was very forward looking for its time.  In an age where static languages and mainframes were king, Emacs dared to build an expressive and remarkably high level controller language in the form of Emacs Lisp.  Emacs benefited from the simplistic terminal model prevalent at the time, in that complex user interactivity (such as is demanded today) was not common, so a very simple model could emerge.  Furthermore, the pace of technology at that time was such that vast arrays of tools could be built upon that model with little demand for change in the underlying model--hence it had a stable platform.  These attributes are turning into disadvantages in today's world, where advanced user interactivity and graphical interfaces are demanded.  Emacs has stretched its usefulness far beyond what I would have foreseen, but I think it's about at its breaking point.  As the user experience at large continues to become more powerful, glossy, and user friendly, the sharp learning curve that Emacs presents, along with its antiquated model, will inevitably lead to a distinct lack of future demand, and therefore a slow path to obsolescence, as its aging user base exits the mainstream of software development and technology.

As I have stated, Emacs benefited from a very simple model.  This worked for Emacs, because at its conception, the very simple model, along with a small complement of enhancements, was all that was warranted. If we were to start to build a system today aimed at providing a feature set comparable to Emacs, it would be totally impossible to start in the same way.  Sure, MVC is still a good choice, but users (of Emacs) accept its primitive interface as the result of a long evolution, and a new application would not gain such a nostalgic acceptance.  Today's interactivity models require fundamentally more complex abstractions, and therefore growing a system from such a primitive design wouldn't work.

Avoiding complexity in implementation is a touchy subject, something that I have been grappling with in the past few weeks.  As I am designing a system that requires a great deal of overall complexity, the primary questions are: How complex does the design need to be?  And where should this complexity lie?  One concern that I've had recently is that as you iterate through an application's design, there is a tendency to shovel complexity around in order to reduce the complexity of the current subsystem.  If you don't iterate enough, the last subsystems to be visited end up resembling the "Sweeping it under the rug" pattern from the "Big Ball of Mud" paper.  If complexity had been accepted at each level to the extent it deserved, the system would be balanced in complexity and the separation of concerns would be clearer; instead, the desire to simplify each subsystem to the utmost leaves us with one or two complexity catch-all subsystems, where the dirty laundry of all the other subsystems is quietly marshaled and festers into an ugly tumor on the system.  This is not to say that we should make systems complex for complexity's sake, but rather that we shouldn't fear complexity at each step of the way.

As for Firefox replacing Emacs, I suppose that at some level it already has (or at least the whole class of web browsers has).  As browsers expose an MVC-type framework upon which entire applications are built, those applications are becoming more and more like the Lisp code that provides the majority of the functionality in Emacs.

OPL

The OPL paper groups patterns into a hierarchy, targeted at the level of abstraction, and therefore the proposed target audience of the pattern.  As is alluded to, however, many developers must be "tall and skinny" in today's world where appropriate frameworks are not available.  This organization may prove more effective as time progresses, and more relevant frameworks emerge.

As for the granularity of the patterns, I think it would be premature to start to generate a detailed encyclopedia of parallel patterns, as the applicability of patterns at a pragmatic and proactive level hasn't yet been addressed. OPL forms a good categorization of these patterns from which further refinement can be tackled.

I think that parallelism in general has been overlooked by application-level programmers for years for a number of reasons, some relating to skill, but many relating to business pressures.  Oftentimes very aggressive release deadlines coupled with extensive feature sets, taken in conjunction with uncertainty about the future of the software, lead to a mentality about software architecture that I would paraphrase as the following (perhaps to be called the pragmatic and overworked programmer's manifesto):
I have nowhere near enough time to build this system "right".  In fact, I'm not sure that I really know what "right" is anymore, given the eternal chain of short-deadline projects that I've worked on over my professional career.  I've developed my skills to build systems at a breakneck pace, giving little thought to the overall performance or long term viability of the application's architecture.  Since programmer time is expensive compared to the ever cheaper machine time, management doesn't want me to spend any time "optimizing" something that they feel they can just "throw more hardware at".  Furthermore, no one really knows if this software is going to see version 2.0.  If I don't get version 1.0 out the door, we certainly won't, and perhaps regardless of my best intentions, this program may fail to meet the users' needs, or the users may not have had these needs to the extent to make this system viable in the first place.  Oh well, if it's a success, we can always go back and "make it right"...or not...who wants to mess with code that's "good enough".  Until a competitor comes along that shows the users that this type of software can be way faster than ours, the users will generally learn to accept the performance that this software offers, so why waste time parallelizing it?
 The problem with the above manifesto is that although time-to-market can make or break a business, lack of scalability can do the same.  If the software succeeds, by the time the programmer goes back to "make it right", they'll have users beating down their door for more features.  In the end, if they really *have* to optimize through parallelization, they'll likely find a hack of a solution, which doesn't work in the abstract case, but causes few enough issues to suffice.  This will make the application brittle to future changes.

It's my opinion that as frameworks evolve, and programming languages are made more expressive and declarative, more parallelism will be implicit.  At some point, these frameworks for parallelism and overall application architecture will become powerful and easy enough that a break-even point will be reached between implementing the system in a breakneck, no-time-to-think-about-architecture manner and adopting a paradigm where most of the functionality falls out of having the appropriate architecture.  At that point, we'll start to see a migration towards parallel frameworks.

Parallelism has become a more recent problem as Moore's law has stopped applying principally to clock speeds, and moved towards concurrent power.  All of a sudden, developers who were banking on the next generation of clock-speed-faster machines to make their single-threaded software run faster have been left out in the cold to wonder about how they can retrofit parallelism--not an easy task for a complex system.

Metacircular JVM

Metacircular VMs provide a very interesting model for reusability as well as performance.  By implementing the VM in the language hosted inside of the VM, there is a co-evolution of features in the VM and in applications, and the VM itself can benefit from the features it is intended to provide.  It's a bit hard to wrap your head around--a bit of a proverbial chicken-and-egg at times, but the bootstrapping process works out these kinks.

This bootstrapping process is probably one of the only disadvantages of such an architecture.  Whereas a VM built in a natively compiled language would "just run", here there is a complex process of image generation and layout that has to be worked through.

As for the threading model, it would certainly be advantageous to take advantage of the maximal threading performance available, likely through the thinnest possible abstraction over the kernel, but as is mentioned, there are some circumstances where the JVM knows more about what's going on (such as the mentioned uncontended locking), and therefore can achieve greater efficiencies.  Therefore, I believe that a pluggable threading model is advantageous, so that the JVM can be tailored to the situation when necessary.

Wednesday, September 23, 2009

AOM

I think that AOM is a terrific architectural style!  I've developed one system in the past that was a simple but near-textbook instance of this style.  The decision to build the system this way was based upon the reality that, due to politics and the indecisiveness of the client (don't get me started!), the schema for a traditional data management solution couldn't be decided totally up front--the requirements would shift drastically from day to day.  I experienced the true power of such an architecture when I was able to update the domain model and, along with the automatic web UI generation that I had in place, have an updated application ready for review within 15 minutes of the conclusion of the conference call detailing said changes.  I'm also currently developing a much larger system, which exhibits some much more advanced concepts from this architectural style, designed to allow the system to be re-configured at runtime to account for changes to rules and data both in the existing sub-domains and in (similar) domains yet to be thought of.

One of my primary concerns as systems like these grow is performance.  I'm using a relational database backend, which has been optimized by the vendor to provide very efficient operations on sets of data defined in database schemas.  When you model the domain at a higher level of abstraction, you break a great deal of the anticipated locality of data, and spread out reads across a much larger set of rows.  Time will tell what type of optimizations will be necessary to keep a system like this performant as it scales to a large number of concurrent users.

For an Object to change its TypeObject, some translation logic may need to occur.  You can imagine scenarios where such a conversion would fail--if the destination TypeObject has a different set of properties, some required properties may need to be provided, and existing properties may be lost if they don't adhere to the new specification.  Additionally, there may be business rules relating the existence of various objects, so perhaps the disappearance of an object of the source type would be a problem...perhaps there are cardinality constraints on the destination type, and, even more confusing, what if there were validation logic that could only be executed at a certain point in time...perhaps a function that depends on the current date/time.  Given all this, I don't know that you can say that, in the general case, an Object CAN change its TypeObject.  In the absence of other constraints, sure, it's just a mapping of properties, but the real power of such an architecture is being able to model complex rules without knowing about those rules at compile time.
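A stripped-down sketch of the Object/TypeObject relationship and of why changing type can fail validation--all of the names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class AdaptiveObjectModel {
    // TypeObject: describes, in data, what properties instances of this type require.
    static class TypeObject {
        final String name;
        final Set<String> requiredProperties;
        TypeObject(String name, Set<String> requiredProperties) {
            this.name = name;
            this.requiredProperties = requiredProperties;
        }
    }

    // Entity: a generic object whose "class" is the TypeObject it points to.
    static class Entity {
        TypeObject type;
        final Map<String, Object> properties = new HashMap<>();

        Entity(TypeObject type) { this.type = type; }

        // Changing type is a data operation, but it can fail validation,
        // which is why it is not unconditionally safe in the general case.
        void changeType(TypeObject newType) {
            for (String required : newType.requiredProperties) {
                if (!properties.containsKey(required)) {
                    throw new IllegalStateException(
                        "cannot become " + newType.name + ": missing property " + required);
                }
            }
            properties.keySet().retainAll(newType.requiredProperties);  // drop properties the new type doesn't define
            type = newType;
        }
    }
}
```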

Versioning in an AOM architecture comes in two flavors--data versioning and model versioning.  Data versioning can be handled by a common set of functionality across all objects by storing updates as new values, with associated timestamps, and upon retrieval, always retrieving the most recent data.  Model versioning can be achieved through translation of "V1" domain objects to "V2" domain objects, or simply by allowing these two different versions of the same type of object to exist in the system at the same time.
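
For the data versioning side, a minimal sketch (again, my own illustration) of the append-and-read-latest approach might look like this:

import java.time.Instant;
import java.util.*;

// Sketch of data versioning: every write appends a timestamped value,
// and reads return the most recent one.  Illustrative only.
class VersionedProperty<T> {
    private final NavigableMap<Instant, T> history = new TreeMap<>();

    void set(T value) { history.put(Instant.now(), value); }

    Optional<T> current() {
        return history.isEmpty() ? Optional.empty()
                                 : Optional.of(history.lastEntry().getValue());
    }

    // "As of" reads fall out almost for free, which is handy for auditing.
    Optional<T> asOf(Instant when) {
        Map.Entry<Instant, T> e = history.floorEntry(when);
        return e == null ? Optional.empty() : Optional.of(e.getValue());
    }
}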

The "explosion of new types" won't happen at the code level, due to the nature of modeling the domain in data, but it can happen in the model itself.  Controlling the growth of the model is as simple as controlling the users who are allowed to change the model, providing adequate training to these users so that they implement domain entities in the desired manner, and ensuring that appropriate communications channels are available, so that functionality isn't duplicated across users and/or time.

JPC

Emulators have an advantage over VMs in that the host machine's hardware architecture does not have to match the guest machine's.  They do, however, suffer inferior performance compared to VMs, due to the additional layer of indirection and translation involved.  Being implemented in Java, JPC gives the user more confidence in isolation/security than a native C++ emulator would, because of the additional layering within a hardened, managed runtime.  C++ is a bit closer to the hardware, however, so it would perform a bit better.

I think the JPC team is a bit naive in their assessment that a hard disk image could be loaded from anywhere on the internet--I see an implied performance issue in that claim.  Operating systems and applications are designed expecting the disk to exhibit latencies within a certain range, and when you make the disk an order of magnitude slower, unacceptable performance is a likely outcome.

With the way JPC has been architected, implementing an emulator for another processor architecture would be considerably easier than starting from scratch--a large number of components could be reused, as JPC abstracts most operations down to their logical equivalents.

With advances in virtualization technology, I can't see myself putting up with the inferior performance of an emulator, and I don't think it would be particularly useful to run any sort of modern operating system on a mobile device--with the current (improving) state of cell data networks, I think it's much more useful to access a remote machine through a VNC- or RDP-type connection from a mobile phone than to try to emulate the entire x86 hardware stack and accompanying OS.

Wednesday, September 16, 2009

7 Layer Software Burrito

Layering is a pattern that must be applied judiciously.  Not that any pattern can be applied mindlessly, but layering specifically can cause major problems if it isn't called for.

Under the right circumstances, however, layering can help a system withstand the test of time.  As access to apps on mobile devices proliferates, layered applications will benefit greatly, as they may be able to change just the top one or two layers in the stack to support new UI and communication semantics, hopefully leaving the majority of the system untouched.

I can't recall if it was in one of our readings, but a quote stands out in my mind to the effect of "show me a team boundary, and I'll show you a software boundary"--the organization of teams tends to naturally dictate the separation of software.  Layering works great for this, because you can divide a project into a set of layers that *should* have boundaries, and exploit this behavior rather than suffer from it.  I think that this is an interesting consideration to make while designing a layered architecture.

Most applications have some form of natural layering, regardless of whether they use the formal definition thereof.  Refactoring an application to use an explicitly stated layered architecture can take on radically different forms.  In applications with excellent separation of concerns and modularization, adding layering may be as simple as uncovering the communication pathways and more strictly defining the interfaces involved.  In more poorly designed applications, where spaghetti code and dubious design practices are present, layering would probably best be left for the inevitable redesign/rewrite, though analysis of the existing code, and the lessons learned from it, would surely shed light on a likely layer decomposition.
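
As a trivial illustration of "more strictly defining the interfaces involved" (a toy example of my own, not from the reading), the boundary between a presentation layer and a domain layer can be made explicit so the upper layer only ever talks to the interface:

// Toy example of making a layer boundary explicit.
// The presentation layer depends only on this interface, never on the
// implementation below it, so the layer underneath can be swapped out.
interface OrderService {
    double totalFor(String orderId);
}

class OrderServiceImpl implements OrderService {
    // In a real system this would call down into a persistence layer.
    public double totalFor(String orderId) {
        return 42.0;  // placeholder value
    }
}

class OrderScreen {
    private final OrderService orders;
    OrderScreen(OrderService orders) { this.orders = orders; }
    void show(String orderId) {
        System.out.println("Order " + orderId + " total: " + orders.totalFor(orderId));
    }
}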

Xen Garden

Before Xen, the best a virtualization platform could do was to scan the code of a running guest and translate the instructions that couldn't be safely virtualized into ones that fit the paradigm.  As you can imagine, this introduces quite a bit of overhead.

Xen challenged this by moving to a model of paravirtualization: forgoing "perfect compatibility" and modifying the guest OS's code to use an alternate set of virtualization-friendly operations.  This takes the burden off of the hypervisor at run time, and also lets the developers make smarter choices than a one-size-fits-all instruction translator could (especially when performance is already an issue--it would be counterproductive to run an expensive algorithm at runtime to optimize runtime performance!).

Xen's architecture was built on the premise of mutual distrust, and was intended to challenge the approach that hardware-emulating virtualization platforms had taken before.

Xen uses "domains" to host guest operating systems, and Domain 0 is a special domain that handles semi-privileged tasks, removing them from the core of the hypervisor, and facilitating easier development and testing.

With processors now supporting virtualization natively, the big task was to handle the exceptions raised when instructions are executed from an illegal context, and to figure out how to execute them in a virtualization-friendly manner.  The fact that Xen is open source was invaluable here, because engineers from Intel and AMD, who had the most intimate knowledge of each processor's feature set, could contribute code directly.

The IOMMU extends the reach of hardware directly into the virtual environment by providing an abstraction layer through which access permissions can be granted, while maintaining the integrity of the virtual environments.

Tuesday, September 15, 2009

Pipes & Filters

While I haven't done too much with the UNIX style pipes and filters, I have built a system that performs a series of data transformations and applies business logic in a fashion akin to a pipes and filters paradigm, though due to the technologies involved, it wasn't explicitly viewed as such.

In that same system, it would have been hard to strictly segregate things along a pipes-and-filters model, due to the diversity of errors that can occur, the need to handle them carefully, and the need to respond in different ways to different compound conditions in the system.

As far as parallelization goes, I believe that the primary criteria for selecting the pipes and filters pattern would be the incremental nature of data processing and the commonality of data representation.  If all data flows can be modeled such that a filter needs only a very small input buffer to perform its work, and a common format can be agreed upon for all pipes so that the "glue logic" overhead is minimal, there will likely be parallelization gains.

Active filters would be best in scenarios where large flows of data are likely to occur at irregular intervals, and parallelization gains are desired.  In these cases, having filters ready and able to process input when it becomes available makes good sense.  Passive filters would work better in a subsystem type environment, where a processing pipeline exists to process some type of request, but which is not the primary work of the system, and therefore not worth the active process overhead.
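
A rough sketch of the active filter idea (my own toy code, assuming a common String format and blocking queues standing in for the pipes):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of an "active" filter: it owns a thread and processes input as it arrives.
// The BlockingQueues act as pipes; a common data format (here, String) keeps glue logic minimal.
public class ActiveFilterDemo {
    static Thread activeFilter(String name, BlockingQueue<String> in,
                               BlockingQueue<String> out,
                               java.util.function.Function<String, String> transform) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String item = in.take();          // block until input is available
                    out.put(transform.apply(item));   // push the result downstream
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // shut down when interrupted
            }
        }, name);
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> pipe1 = new LinkedBlockingQueue<>();
        BlockingQueue<String> pipe2 = new LinkedBlockingQueue<>();
        BlockingQueue<String> pipe3 = new LinkedBlockingQueue<>();

        activeFilter("upcase", pipe1, pipe2, String::toUpperCase);
        activeFilter("exclaim", pipe2, pipe3, s -> s + "!");

        pipe1.put("hello");
        System.out.println(pipe3.take());  // prints HELLO!
    }
}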

Awww...look at our little data, all grown up, and interacting with the real world!

In contrast with some other classmates/bloggers, I've only recently even signed up for Facebook....I've been a real social networking laggard, generally seeing most features as time-wasters, but I certainly can't argue with its popularity.  It has provided a leverageable platform for application developers to reuse and extend, and has provided a great benefit in facilitating reach to end users.

The architecture of Facebook is interesting to me because it has to deal with some fundamentally challenging issues of trust and privacy, as well as weaving these issues into the constantly changing landscape that is the Web, and I think that the Facebook engineers have done an admirable job.

Obviously, Facebook realized that they couldn't keep up with the diversity and pace of users' desires, and hence the 3rd party application system is the natural way to continue growing the business without having to bear the ongoing brunt of innovation.

The architecture they put in place to support this is quite clever, and has obviously proven its mettle, as witnessed by the abundance of 3rd party content/providers and the users consuming that content.

FQL supported this by bringing a more familiar "query" paradigm to the API calls, and by enabling some degree of operation batching.

The 3rd party application model is an interesting and challenging one, because typically applications are trusted with the content they must display, but in this situation it's not the case--similar to asking a taxi driver to drive you to a secret location by blindfolding them!  Facebook made this possible, however, by changing the model of data access.  With FBML, the application doesn't so much process the data as declare what data should be processed and presented--essentially offloading some of the application's runtime into the Facebook environment.

As the power of JavaScript has grown, its presence in the 3rd party application environment has become inevitable.  Instead of taking an Apple iPhone approach--reviewing/inspecting and then approving or denying entry--Facebook made the model much more open.  Instead of jumping through the bureaucratic hoops of a review process, developers must jump through technical hoops, by structuring their browser code within a restricted framework.  For a site like Facebook, I think this was a smart choice, because of the rapid pace at which such content changes.

Facebook's architecture gives us a glimpse into one possible future of how networked applications are built.  Whereas traditionally the application/developer has been given free rein over all pertinent data in a process, Facebook's architecture makes us rethink the validity and sustainability of that paradigm--if applications/developers have access to all the data they process, how hesitant are we going to be to provide access to that data, thereby limiting future growth?  If, on the other hand, a method of computation and composition can be offered, as in the Facebook architecture, wherein sensitive data is kept within tightly controlled confines, the future of such applications is certainly bright!

Thursday, September 10, 2009

Christopher Alexander & Patterns

I thought these chapters were very interesting!  Over the past few days I've been looking over some of my wife's architecture books (she's an Interior Design student), and I can see more and more parallels between the two.  Initially, I had seen these parallels more in regards to building large public buildings, and the various disciplines, stakeholders, and the various forces involved.  Christopher Alexander has made me see how architecture and patterns relate to a much wider range of situations.

A pattern language, either in software or in physical architecture, provides a mechanism for describing a facet of design: its purpose, uses, strengths, motivations, applications, and implementation.  Pattern languages help break down tasks (once again, in either physical or software architecture) and provide templated building blocks with which the designer can go about building a system/structure with proven quality attributes.  Alexander stresses that these pattern languages can help designers match the context, function, and form of various facets to the overall whole, and they provide a framework for organic growth, wherein a pattern doesn't impose a restriction, but rather offers a pathway of inquiry.

I think that physical architecture patterns differ from software patterns in that they are more immutable.  Whereas physical architecture is subject to the laws of physics--statics, optics, etc.--software is subject to an ever-changing set of fundamental paradigms.  Though ideas come in and out of vogue in both fields, I think that software patterns have to be more flexible and complex.  Even though the author talks about buildings, towns, etc., coming into being through organic processes, there is still a relatively static end product--you may build a house based on patterns, but you're not likely to start shuffling the windows around once it's built.  Software, on the other hand, is always changing; as requirements evolve and the business environment changes, software is expected to evolve gracefully to meet those needs.  This differs from the evolution of a town: in a town, you may build new things to meet current needs while remaining contextually relevant, but you don't undertake changes as sweeping as you might with a software system--you don't talk about lifting up a whole town and placing it on a more modern sub-grade--whereas with software, upgrading the platform and paradigms is an ongoing challenge and requirement.

I think that (physical) architectural patterns are timeless to the extent that physics and basic human nature are immutable.  Some patterns that relate to the current state of technology or lifestyle (such as barns)--not so much.  Even in software patterns, there are some elements of timelessness: perhaps not in the exact wording and digital world-view that was prevalent at the time of writing, but in the fundamental decomposition of ideas and intentions, certain aspects of software patterns, too, will live forever.

RESTing up for Ch5

REST to me seems like a mapping of a relational database onto the world of the web.  Only 4 verbs?  That's fine...SQL has done just fine with SELECT, INSERT, UPDATE, and DELETE for years.  REST implies that you can layer on an arbitrary level of complexity in the type of query logic you can express, so I don't see it being restrictive there...
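
The rough correspondence I have in mind, written out as a toy dispatcher (purely illustrative, not any real framework's mapping):

// Rough correspondence between REST's verbs and SQL's CRUD statements,
// expressed as a toy dispatcher.
class RestToCrud {
    static String toSql(String httpMethod) {
        switch (httpMethod) {
            case "GET":    return "SELECT";  // read a resource
            case "POST":   return "INSERT";  // create a resource
            case "PUT":    return "UPDATE";  // replace/update a resource
            case "DELETE": return "DELETE";  // remove a resource
            default:       throw new IllegalArgumentException("Unmapped verb: " + httpMethod);
        }
    }
}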

Really, aren't all systems "data" oriented?  Whether you consider "data" to be the state of your object graph or fields in a database table, the purpose of software in general is to manage data.  Sometimes it's de-emphasized and subordinated to the workflows or behaviors that a system presents, but that's just semantics.

While I can see that it could be useful to always go to the same URL for the same data, I'm not sure how well this works for a lot of compound querying.  At some point the system is going to exhibit behaviors, and they have to be invoked somehow; whether you consider such an invocation to be a type of query on one of the types of data involved, or a separate contract, there are still going to be situations where you need to do something that involves a bunch of different types of data and returns a bunch of different types of data.  Where should we implement such an operation?  I don't think REST makes this very clear.

In fact, I think that what REST tries to do has been a common theme throughout the evolution of the software world.  Someone comes up with a system that represents something well, and then over time extensions are added to it until it can do everything everyone wants, by which point everyone considers it bloated and starts looking toward a "simpler" format to accomplish the current view of what the core functions are--but in the process, a lot of the power built into previous systems/formats gets abandoned.  I think REST falls into this same trap.  Sure, it means you find the same data in the same place, and it offers benefits such as architectural memoization (caching), but it falls short of specifying how query/selection logic, as well as behavior invocation, should be handled.  So the next step is that someone will develop some extension that standardizes that, and we'll be further away from the utopian view of what REST should be, and not a whole lot better off than we were with web services.

Don't get me wrong, each new such technology leverages advances throughout computing, and the evolved system is usually better than its predecessors, but I don't know why each such advancement has to be a paradigm shift.  Why can't we just go back and revise existing standards, trim the fat, and extend toward the future?

This chapter emphasizes "not being tied to certain infrastructure", but the reality is that you probably will be tied to quite a bit.  You'll end up building your system on a set of libraries and frameworks, and with methodologies that work nicely with your current technology stack, and if you ever want to make a serious infrastructure change, it'll be no easy task no matter what.  Wasn't the purpose of the IP protocol that different systems with different physical architectures, scales, and purposes could all talk to one another in a common way?  In the same way that IP has sprouted a bunch of different vendor-specific extensions, so shall REST, and as with most things, I doubt it will live up to the hype.

From a pragmatic view, it's just another different perspective on data, and one more layer of indirection in the world to resolve resources.

Tuesday, September 8, 2009

BA Ch 4 & ArchJava

Both the BA Ch4 and the ArchJava paper relate to a quite elusive concept (at least to me) in the modern world of software, which perhaps is just an embodiment of software architecture in general, but whose specific instances really hit home for me: How do you take a set of (potentially) idiomatically distinct libraries, frameworks, and ideologies, and merge them with the ideal view of how software should fit together?

Disclaimer: I can't speak from a Java perspective, as I somehow managed to get my whole BS in CS without writing a single line of Java, so I'm going to have to use .NET technologies as a close substitute.

ArchJava approaches this question from the standpoint of language enhancements on top of Java, asking that any code you write be reworked within a paradigm of component hierarchies, ports, and connections.  One big problem I have with this, and with many idealistic patterns and practices, is that they oftentimes fail to recognize and leverage the existing infrastructure that constituent software brings to the table.  For example, if I were using something like ArchJava while writing an ASP.NET page--which already has its own concepts of component (or control) hierarchies and communication strategies--I'm not quite sure where an ArchJava-type solution leaves me.  You might say that the whole ASP.NET subsystem should be excluded from the ArchJava-style code model, but as soon as you try to extend it in a few different ways (such as control subclassing or HTTP pipeline processing add-ins), you break that scope, and it seems that only the much less strict, and less powerful, mode of ArchJava can be used.
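
To make the component/port/connect idea concrete without the actual ArchJava compiler, here's a plain-Java approximation of my own (not ArchJava syntax): components expose explicit ports, and communication only happens through connected ports.

// Plain-Java approximation of ArchJava's component/port/connect idea.
// Toy code of my own; real ArchJava uses language-level keywords and static compiler checks.
interface DataPort {              // a "port": the only way in or out of a component
    String userData();
}

class DataSource {
    // The component exposes its functionality as a port it provides.
    DataPort providedPort() {
        return () -> "user data from the source component";
    }
}

class WebPage {
    private DataPort required;    // a port this component requires but does not implement

    void connect(DataPort port) { // the "connect" step, done explicitly here
        this.required = port;
    }

    String render() {
        return "<html>" + required.userData() + "</html>";
    }
}

class Application {
    public static void main(String[] args) {
        DataSource source = new DataSource();
        WebPage page = new WebPage();
        page.connect(source.providedPort());  // wiring that ArchJava would verify statically
        System.out.println(page.render());
    }
}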

The chapter "Architecting for Scale" grapples with this question, though it doesn't propose an ideal solution in the same level of detail that ArchJava does.  I found the Form/Binding paradigm very interesting, in that it goes a long way toward leveraging existing technologies (such as Swing), but generates a wrapper/abstraction that allows the developers to work within the application's paradigm.  The buy vs. build decisions also play into my question, as "roll your own" is guaranteed to be much closer to the ideal architectural style, though potentially less robust, and often more expensive.

One thing that, coming from the .NET world, I see as paramount is the design-time support for these tools and paradigms.  ArchJava, for instance, supplants the standard compiler with an enhanced version.  That is all well and good, and can easily integrate with a build flow, but at design time the developer is left with constructs which, while they may compile with the extended compiler, are not (necessarily) supported by things like code coloring, refactoring tools, IntelliSense, and perhaps code analysis tools.  In my opinion, these shortcomings make this type of tool more bleeding-edge and disruptive--perhaps to a greater degree than the benefit it provides.

Thursday, September 3, 2009

Beautiful(?) Darkstar (BA Ch 3)

The intent behind Project Darkstar has great merit--anywhere you can bring code reuse to make building software easier, cheaper, faster, and more stable is a worthwhile endeavor.  The architecture laid out provides a good framework for developers to build from, but I believe it falls short of its intention.  Whereas it was designed to be a "black box" platform, where developers wouldn't have to understand its inner workings, it really only succeeds as a starting codebase.  It's well known that you can't (in the general case) take a standard algorithm and run it unchanged in a distributed or concurrent fashion (as the team found out), and as such, Darkstar misses the mark for its overall intention.  In the example where a team had built a data model on Darkstar with a "coordinator" object, it's clear that there was some other useful paradigm in play.  Perhaps the Darkstar team, instead of coaching the game team on how to make their game work better on Darkstar, should have looked at the reason for using such an object, and performed some analysis to see whether they could build a Darkstar-compatible base class or method library, so that the poorly performing piece could be reworked to function in the general case and provide some additional utility to game developers.

If I had been on this team from the beginning, my biggest concern would have been potential performance.  I would advocate establishing benchmarks from other games in the industry--even without knowing the inner workings of other game engines--at least to quantify what response times are expected for what types of operations.  From there, I would have made incremental performance tests under various load conditions part of the standard automated testing suite, so that performance bottlenecks could be identified as early as possible after implementation.
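
The kind of automated check I have in mind would be something like this toy example (the timed operation and the 50 ms budget are made-up placeholders):

// Toy latency check of the sort I'd fold into an automated test suite.
public class LatencyCheck {
    static long timeMillis(Runnable operation) {
        long start = System.nanoTime();
        operation.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            // stand-in for "process one simulated game action under load"
            try { Thread.sleep(10); } catch (InterruptedException ignored) { }
        });
        long budgetMillis = 50;  // agreed-upon response-time budget for this operation
        if (elapsed > budgetMillis) {
            throw new AssertionError("Operation took " + elapsed + " ms, budget is " + budgetMillis + " ms");
        }
        System.out.println("OK: " + elapsed + " ms (budget " + budgetMillis + " ms)");
    }
}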

Thursday, August 27, 2009

Let Me Introduce Myself.../Beautiful Architecture CH1

Hello Everyone,

Let me first introduce myself.  My name is Dan Orchard, and I'm an on-campus student, living in Urbana.  I did my undergrad in CS from this fine university, and graduated in May of 2007.  I'm currently in the Professional MBA program, class of 2010, and I'm taking this course as my elective for that degree.  I also work full time (the Professional MBA program is designed for those who do).  I'm the SVP of Software Engineering for my firm, PosTrack Techologies (www.postrack.net), in charge of a small software development team in Joliet, IL.

The software engineering courses that I took as an undergrad are among my favorites, and I expect this course to continue that trend.  In this day and age, meeting the functional requirements for a new application is easy.  Continuing to do so while maintaining acceptable levels of other (nonfunctional) quality metrics is not so easy.

As a business student now, the focus of my studies generally follows themes of creating value for customers, cutting costs through efficiency, creating competitive advantage for the firm, being agile in the face of changes in the business environment, etc.  Viewing these goals superficially, it can be easy to take the shortest path to features/release (the purely functional requirements) and gloss over the elements that support the long-term success of the project/company.  If you ask anyone on my development team, they'll tell you that I'm constantly talking about software architecture and how we can leverage structure to provide longevity, quality, and performance in our applications, and I know that this class will help me take my skills to the next level.

The job of the software architect is a difficult one, challenging even the most experienced architects with constantly changing technological and business environments, perspectives, and goals, and with the arbitrative role the architect must play between stakeholders with conflicting demands.  Try telling a building architect that they must design a structure with materials whose physical properties are debatable, a building site subject to change, an undetermined climate, and an intended use open to interpretation, and they would probably laugh you out of their office--yet that is the job software architects are charged with every day.  Even musical composition is subject to far fewer variables than software.  Hence, I believe that analogies between software architecture and other fields only work when describing its role to a lay person, and that attempts to extend the analogy to the tasks at hand fall flat.

In an age when time-to-market is king, and what was difficult-to-build, cutting edge functionality a year ago is taken for granted and simply expected today, the right type and amount of architecture is critical to success.

--Dan

Tuesday, August 25, 2009