Suspending Rakudo support for Parrot

At FOSDEM 2015, Larry announced that there will likely be a Perl 6 release candidate in 2015, possibly around September. What we’re aiming for is concurrent publication of a language specification that has been implemented and tested in at least one usable compilation environment — i.e., Rakudo Perl 6.

So, for the rest of 2015, we can expect the Rakudo development team to be highly focused on doing only those things needed to prepare for the Perl 6 release later in the year. And, from previous planning and discussion, we know that there are three major areas that need work prior to release: the Great List Refactor (GLR), Native Shaped Arrays (NSA), and Normalization Form Grapheme (NFG).

…which brings us to Parrot. Each of the above items is made significantly more complicated by Rakudo’s ongoing support for Parrot, either because Parrot lacks key features needed for implementation (NSA, NFG) or because a lot of special-case code is being used to maintain adequate performance (lists and GLR).

At present, most of the userbase has switched over to MoarVM as the backend, for a multitude of reasons. More importantly, there currently aren’t any Rakudo or NQP developers on hand who are eager to tackle these problems for Parrot.

In order to better focus our limited resources on the tasks needed for a Perl 6 language release later in the year, we’re expecting to suspend Rakudo’s support for the Parrot backend sometime shortly after the 2015.02 release.

Unfortunately the changes that need to be made, especially for the GLR, make it impractical to simply leave existing Parrot support in place and have it continue to work at a “degraded” level. Many of the underlying assumptions will be changing. It will instead be more effective to (re)build the new systems without Parrot support and then re-establish Parrot as if it were a new backend VM for Rakudo, following the techniques that were used to create the JVM, MoarVM, and other backends for Rakudo.

NQP will continue to support Parrot as before; none of the Rakudo refactorings require any changes to NQP.

If there are people who want to work on refactoring Rakudo’s support for Parrot so that it’s more consistent with the other VMs, we can certainly point them in the right direction. For the GLR this will mainly consist of migrating Parrot-specific code from Rakudo into NQP’s APIs. For the NSA and NFG work, it will involve developing a lot of new code and feature capabilities that Parrot doesn’t possess.


APW2014 and the Rakudo Great List Refactor

This past weekend I attended the 2014 Austrian Perl Workshop and Hackathon in Salzburg, which turned out to be an excellent way for me to catch up on recent changes to Perl 6 and Rakudo. I also wanted to participate directly in discussions about the Great List Refactor, which has been a longstanding topic in Rakudo development.

What exactly is the “Great List Refactor” (GLR)? For several years Rakudo developers and users have identified a number of problems with the existing implementation of list types — most notably performance. But we’ve also observed the need for user-facing changes in the design, especially in generating and flattening lists.  So the term GLR now encompasses all of the list-related changes that seem to want to be made.

It’s a significant (“great”) refactor because our past experience has shown that small changes in the list implementation often have far-reaching effects. Almost any bit of rework of list fundamentals requires a fairly significant refactor throughout much of the codebase. This is because lists are so fundamental to how Perl 6 works internally, just like the object model. So, as the number of things that are desirable to fix or change has grown, so has the estimated size of the GLR effort, and the need to try to achieve it “all at once” rather than piecemeal.

The pressure to make progress on the GLR has been steadily increasing, and APW2014 was significant in that it brought many of the key people together in one location. Everyone I’ve talked to agrees that APW2014 was a smashing success, and I believe that we’ve now resolved most of the remaining GLR design issues. The rest of this post describes them.

This is an appropriate moment to recognize and thank the people behind the APW effort. The organizers did a great job.  The Techno-Z and ncm.at venues were fantastic locations for our meetings and discussions, and I especially thank ncm.at, Techno-Z, yesterdigital, and vienna.pm for their generous support in providing venues and food at the event.

So, here’s my summary of GLR issues where we were able to reach significant progress and consensus.

You are now leaving flatland

(Be sure to visit our gift shop!)

Much of the GLR discussion at APW2014 concerned flattening list context in Perl 6. Over the past few months and years Perl 6 has slowly but steadily reduced the number of functions and operators that flatten by default. In fact, a very recent (and profound) change occurred within the last couple of months, when the .[] subscript operator for Parcels switched from flattening to non-flattening. To illustrate the difference, the expression

(10,(11,12,13),(14,15)).[2]

previously would flatten out the elements to return 12, but now no longer flattens and produces (14,15). As a related consequence, .elems no longer flattens either, changing from 6 to 3.
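Concretely, the new behavior looks something like this (a quick sketch; output is shown informally, and the .flat call reflects the explicit-flattening style discussed below):

say (10,(11,12,13),(14,15)).[2];          # (14 15) -- the subscript no longer flattens
say (10,(11,12,13),(14,15)).elems;        # 3
say (10,(11,12,13),(14,15)).flat.elems;   # 6 -- flattening is now an explicit request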

Unfortunately, this change created an inconsistency between Parcels and Lists, because .[] and .elems on Lists continued to flatten. Since programmers often don’t know (or care) when they’re working with a Parcel or a List, the inconsistency was becoming a significant pain point. Other inconsistencies were increasing as well: some methods like .sort, .pick, and .roll have become non-flattening, while other methods like .map, .grep, and .max continue to flatten. There has been no good guideline for deciding which methods should do which.

Flattening behavior is great when you want it, which is a lot of the time.  After all, that’s what Perl 5 does, and it’s a pretty popular language. But once a list is flattened it’s hard to recover the original structure if you want it back — flattening discards information.

So, after many animated discussions, review of lots of code snippets, and seeking some level of consistency, the consensus on Perl 6 flattening behavior seems to be:

  • List assignment and the [ ] array constructor are unchanged; they continue to flatten their input elements. (Arrays are naturally flat.)
  • The for statement is unchanged. for @a,@b { ... } flattens @a,@b and applies the block to each element of @a followed by each element of @b. Note that flattening can easily be suppressed by itemization, thus for @a, $@b { ... } flattens @a but does all of @b in a single iteration.
  • Method calls tend not to flatten their invocant. This most impacts .map, .grep, and .first… the programmer will have to use .flat.grep and .flat.first to flatten the list invocant.  Notably, .map will no longer flatten its invocant — a significant change — but we’re introducing .for as a shortcut for .flat.map to preserve a direct isomorphism with the for statement. There’s ongoing conjecture about creating an operator or syntax for flattening, likely a postfix of some sort, so that something like .|grep would be a convenient alternative to .flat.grep, but that decision doesn’t appear to need to be made as part of the GLR itself. (There’s a short sketch of the explicit-flattening style just after this list.)
  • Argument lists continue to depend on the context in which they are bound: flattening for slurpy parameters, top-level itemizing for slice parameters, and non-flattening (or deferred flattening) for Positionals.
  • The two points above suggest a general guideline: method call invocants are usually not flattened, while function call arguments are more likely to be.
    ((1,2), 3, (4,5)).map({...}) # iterates over three elements
    map {...}, ((1,2),3,(4,5))   # iterates over five elements
    
    (@a, @b, @c).pick(1)         # picks one of three arrays
    pick 1, @a, @b, @c           # flatten arrays and pick one element
    
  • We think it will be very difficult to have a guideline that applies 100% of the time — there will be a few exceptions to the rule but they should generally feel natural.
  • The flattening behavior of operators continues to be specific to each operator — some will flatten, others will not. Fortunately, any flattening behavior should be consistent within a precedence level, will generally be dwimmy, and there are easy ways to use contextualizers to quickly switch to the behavior you want.
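To make the invocant guideline concrete, here’s a minimal sketch of the explicit-flattening style (this shows the intended post-GLR semantics; details may still shift as the work proceeds):

((1,2), 3, (4,5)).grep(Int);        # non-flattening: tests three elements, only the bare 3 matches
((1,2), 3, (4,5)).flat.grep(Int);   # flattening made explicit: tests five elements, all of them Ints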

United Parcel Severance

As a result of improvements in flattening consistency and behavior, it appears that we can eliminate the Parcel type altogether. There was almost unanimous agreement on, and enthusiasm for, this notion, as having both the Parcel and List types is quite confusing.

Parcel was originally conceived for Perl 6 as a “hidden type” that programmers would rarely encounter, but it didn’t work out that way in practice. It’s nice that we may be able to hide it again — by eliminating it altogether. 🙂

Thus infix:<,> will now create Lists directly. It’s likely that comma-Lists will be immutable, at least in the initial implementation. Later we may relax that restriction, although immutability also provides some optimization benefits, and Jonathan points out that it may help to implement fixed-size Arrays.
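In other words, once Parcel is gone, something like the following should hold (a sketch of the intended end state, not of today’s behavior):

say (1, 2, 3).WHAT;   # List -- infix:<,> builds a List directly, with no Parcel in sight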

Speaking of optimization, eliminating Parcel may be a big boost to performance, since Rakudo currently does a fair bit of converting Parcels to Lists and vice-versa, much of which goes away if everything is a List.

A few more times around the (loop) blocks

During a dinner discussion Jonathan reminded me that Synopsis 4 has all of the looping constructs as list generators, but Rakudo really only implements for at the moment. He also pointed out that if the loop generators are implemented, many functions that currently use gather/take could potentially use a loop instead, and this could be much more performant. After thinking on it a bit, I think Jonathan is on to something. For example, the code for IO::Handle.lines() currently does something like:

gather {
    until not $!PIO.eof {
        $!ins = $!ins + 1;
        take self.get;
    }
}

With a lazy while generator, it could be written as

(while not $!PIO.eof { $!ins++; self.get });

This is lazily processed, but doesn’t involve any of the exception or continuation handling that gather/take requires. And although while might choose not to be strictly lazy, lines() definitely should be, so we may also use the lazy statement prefix:

lazy while not $!PIO.eof { $!ins++; self.get };

The lazy prefix tells the list returned from the while that it’s to generate as lazily as it possibly can, only returning the minimum number of elements needed to satisfy each request.
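The practical payoff is that a consumer of lines() only pays for what it actually asks for. As a usage sketch (the file name here is just a placeholder):

my @first = "some-big-file.txt".IO.lines[^10];   # with a lazy lines(), only about the first ten lines ever get read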

So as part of the GLR, we’ll implement the lazy list forms of all of the looping constructs (for, while, until, repeat, loop). In the process I also plan to unify them under a single LoopIter type, which can avoid repetition and be heavily optimized.

This new loop iterator pattern should also make it possible to improve performance of for statements when performed in sink context. Currently for statements always generate calls to .map, passing the body of the loop as a closure. But in sink context the block of a for statement could potentially be inlined. This is the way blocks in most other loops are currently generated. Inlining the block of the body could greatly increase performance of for loops in sink context (which are quite common).
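Roughly speaking (the names @items and process below are placeholders), the current codegen treats a for statement as a method call even when its result is discarded:

for @items { process($_) }           # what the programmer writes
@items.map(-> $x { process($x) });   # roughly what gets generated today, plus the sink handling discussed below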

Many people are aware of the problem that constructs such as for and map aren’t “consuming” their input during processing. In other words, if you’re doing .map on a temporary list containing a million elements, the entire list stays around until all have been processed, which could eat up a lot of memory.

Naive solutions to this problem just don’t work — they carry lots of nasty side effects related to binding that led us to design immutable Iterators. We reviewed a few of them at the hackathon, and came back to the immutable Iterator we have now as the correct one. Part of the problem is that the current implementation is a little “leaky”, so that references to temporary objects hang around longer than we’d like, keeping the “processed” elements alive. The new implementation will plug some of the leaks, and then some judicious management of temporaries ought to take care of the rest.

I’ve got a sinking feeling…

In the past year much work has been done to add sink context handling to Rakudo, but I’ve never felt the implementation we have now is what we really want. For one, the current approach bloats the codegen by adding a call to .sink after every sink-context statement (i.e., most of them). Also, this only handles sink for the object returned by a Routine — the Routine itself has no way of knowing it’s being called in sink context such that it could optimize what it produces (and not bother to calculate or return a result).

We’d really like each Routine to know when it’s being called in sink context.  Perl 5 folks will instantly say “Hey, that’s wantarray!”, which we long ago determined isn’t generally feasible in Perl 6.

However, although a generalized wantarray is still out of reach, we can provide it for the limited case of detecting sink contexts that we’re generating now, since those are all statically determined. This means a Routine can check if it’s been called in sink context, and use that to select a different codepath or result.  Jonathan speculates that the mechanism will be a flag in the callsite, and I further speculate the Routine will have a macro-like keyword to check that flag.

Even with detecting context, we still want any objects returned by a Routine to have .sink invoked on them.  Instead of generating code for this after each sink-level statement, we can do it as part of the general return handler for Routines; a Routine in sink context invokes .sink on the object it would’ve otherwise returned to the caller.  This directly leads to other potential optimizations:  we can avoid .sink on some objects altogether by checking their type, and the return handler probably doesn’t need to do any decontainerizing on the return value.

As happy as I am to have discovered this way to pass sink context down into Routines, please don’t take this as opening an easy path to lots of other wantarray-like capabilities in Perl 6. There may be others, and we can look for them, but I believe sink context’s static nature (as well as the fact that a false negative generally isn’t harmful) makes it quite a special case.

The value of consistency

One area that has always been ambiguous in the Synopses is determining when various contextualizing methods must return a copy or are allowed to return self. For example, if I invoke .values on a List object, can I just return self, or must I return a clone that can be modified without affecting the original? What about .list and .flat on an already-flattened list?

The ultra-safe answer here is probably to always return a copy… but that can leave us with a lot of (intermediate) copies being made and lying around. Always returning self leads to unwanted action-at-a-distance bugs.

After discussion with Larry and Jonathan, I’ve decided that true contextualizers like .list and .flat are allowed to return self, but other methods are generally obligated to return an independent object.  This seems to work well for all of the methods I’ve considered thus far, and may be a general pattern that extends to contextualizers outside of the GLR.

Now it’s just a SMOPAD

(small matter of programming and documentation)

The synopses — especially Synopsis 7 — have always been problematic in describing how lists work in Perl 6. The details given for lists have often been conjectural ideas that quickly turned out to be epic fails in practice. The last major list implementation was done in the summer of 2010, and Synopsis 7 was supposed to be updated to reflect that design. However, the ongoing inconsistencies (that have led to the GLR) really precluded any meaningful update to the synopses.

With the progress recently made at APW2014, I’m really comfortable about where the Great List Refactor is leading us. It won’t be a trivial effort; there will be a significant rewrite and refactoring of the current Rakudo codebase, most of which will have to be done in a branch. And of course we’ll have to do a lot of testing, not only of the Perl 6 test suite but also of the impact on the module ecosystem. But now that most of the hard decisions have been made, we have a roadmap that I hope will enable most of the GLR to be complete and documented in the synopses by Thanksgiving 2014.

Stay tuned.


A Rakudo Performance

At YAPC::NA 2012 in Madison, WI I gave a lightning talk about basic improvements in Rakudo’s performance over the past couple of years.  Earlier today the video of the lightning talks session appeared on YouTube; I’ve clipped out my talk from the session into a separate video below.  Enjoy!

 


Roborama 2012a

A couple of weeks ago I entered the Dallas Personal Robotics Group Roborama 2012a competition, and managed to come away with first place in the RoboColumbus event and the Line Following event (Senior Level).  For my robot I used one of the LEGO Mindstorms sets that we’ve been acquiring for use by our FIRST LEGO League team, along with various third-party sensors.

The goal of the RoboColumbus event was to build a robot that could navigate from a starting point to an ending point placed as far apart as possible; robots are scored on distance to the target when the robot stops.  If multiple robots touch the finish marker (i.e., distance zero), then the time needed to complete the course determines the rankings.   This year’s event was in a long hall with the target marked by an orange traffic cone.

HiTechnic IR ball and IRSeeker sensor

Contestants are allowed to make minor modifications to the course to aid navigation, so I equipped my robot with a HiTechnic IRSeeker sensor and put an infrared (IR) electronic ball on top of the traffic cone.  The IRSeeker sensor reports the relative direction to the ball (in multiples of 30 degrees), so the robot simply traveled forward until the sensor picked up the IR signal, then used the IR to home in on the traffic cone.  You can see the results of the winning run in the video below, especially around the 0:33 mark when the robot makes its first significant IR correction:

http://youtu.be/x1GvpYAArfY

My first two runs of RoboColumbus didn’t do nearly as well; the robot kept curving to the right for a variety of reasons, and so it never got a lock on the IR ball.  Some quick program changes at the contest and adjustments to the starting direction finally made for the winning run.

For the Line Following contest, the course consisted of white vinyl tiles with electrical tape in various patterns, including line gaps and sharp angles.  I used a LineLeader sensor from mindsensors.com for basic line following, with some heuristics for handling the gap conditions.  The robot performed fine on my test tiles at home, but had difficulty with the “gap S curve” tiles used at the contest.  However, my robot was the only one that successfully navigated the right angle turns, so I still ended up with first place.  🙂

Matthew and Anthony from our FLL robotics team also won other events in the contest, and there are more videos and photos available.  The contest was a huge amount of fun and I’m already working on new robot designs for the next competition.

Many thanks to DPRG and the contest sponsors for putting on a great competition!

 


Oslo Perl 6 Patterns Hackathon, Days 1-2

For the past couple of days I’ve been in Oslo, Norway, attending the Perl 6 Patterns Hackathon sponsored by Oslo Perl Mongers, Jan Ingvoldstad IT, and NUUG Foundation. A lot of things are happening at the hackathon, as you’ll see below.

First, Oslo itself is every bit as nice as I remember from attending the Nordic Perl Workshop in 2009 (and another hackathon that took place then). And once again, the hackathon organizers (Jan, Karl, Salve) have done an amazing job of making sure that all of us at this hackathon can remain productive working on Perl 6 while also having a good time while we’re here. The food, facilities, and hospitality have been outstanding.

moritz++ and jnthn++ have already blogged about their work thus far at the hackathon; here are a few other things that have taken place while we’re here:

* Friday morning I noticed in the latest Rakudo compiler release notes that autovivification of hashes and arrays still wasn’t fully implemented in the nom version of Rakudo. So, I spent a bit of time early Friday confirming some autovivification edge cases with jnthn++ and masak++, and then implemented the rest of what we need from autoviv. So, that’s an important feature restored.

* jnthn++ worked on getting the :i (:ignorecase) flag working for interpolated literals in regexes; I helped a bit with that, but in the process noticed just how bad things were for people who had Parrot compiled without ICU. There were lots of failing spec tests and problems with doing case-insensitive regex matches, even for simple strings. The crux of the problem was that Parrot simply threw exceptions for case conversions of several Unicode encodings whenever ICU wasn’t present, even if the strings involved had only ASCII or Latin-1 characters.  I noticed that this problem affected several of the people attending the hackathon today (including jnthn++), so I decided it could not be allowed to live.  So, I added a patch to Parrot that enables more case conversions when ICU isn’t present, as long as all of the codepoints involved are in ASCII or Latin-1 (which the majority of them are). If ICU is present, Parrot continues to use ICU, but if ICU isn’t available, Parrot is at least able to handle case conversions for most of the strings we encounter.

* We had a lot of relative newcomers to Perl 6 today, so masak++ took some time to give them all an excellent tour of the Perl 6 universe.  Based on masak’s introduction, several of today’s attendees were able to quickly start contributing some very useful additions to Perl 6 and Rakudo.

* Marcus Ramberg vastly improved the “-h” option to the Rakudo executable, listing many more of the available and useful options. Then Marcus and tadzik++ fixed up the “--doc” option as well, which extracts documentation from the program code and displays it in a readable form.

* masak++ stumbled across a bug involving comparisons of Pair objects with uninitialized variables; we ultimately tracked it down to an issue of comparing things against +Inf and -Inf. A couple of short patches fixed that problem.

* Geir Amdal added some methods to IO to retrieve file stat times from the operating system. We had these methods in the Beijing release but they had not yet been ported to nom — it’s good to have them back.

* Salve (sjn++) and several other hackers started a project of developing a much better set of reviewed examples for newcomers to examine. I pointed out the perl6-examples repository (which hasn’t had updates in quite a long time) and suggested they work on adopting/reorganizing it. At sjn’s suggestion, moritz++ added the push hooks so that commits to perl6-examples show up on the #perl6 channel, and throughout the day we were all treated to seeing improvements to the existing examples and hearing very useful comments about what the folks were seeing and experiencing there.

* sjn++ also asked about how one would determine the Rakudo version number from within a program; while that information has been somewhat available via $*PERL<version>, it wasn’t really in a useful form. So, late this evening I reworked the implementation of $*PERL somewhat so that it’s possible to determine the compiler, compiler version, compiler release number, and other information. moritz++ also at one point needed a way to determine the version of nqp being used to build Rakudo; I didn’t add it yet but will squeeze that in tomorrow. I’m not entirely happy with the way $*PERL is set up now; hopefully we can get some design and specification clarifications for it soon. At any rate, compiler version information is now available for programs to examine.

* On a related note, while reviewing version number information in Synopsis 2 I noticed that there’s a Version class we don’t yet implement — it doesn’t seem too hard to add so I may prototype one tomorrow.

* jnthn++ and I were able to spend some much needed time plotting out the next moves for the AST implementation, currently called QAST. QAST is part of the nqp implementation, and is the successor to PAST (part of the Parrot repository). Some of the refactors we’ll be able to make in QAST look like they will enable huge improvements in the speed, readability, and writability of compilers in NQP. (See jnthn++’s blog post for more details on QAST.)

There’s of course much more that happened, including many bug fixes and improvements, but those are some of the bigger items. I’m hoping to find some time tomorrow to chase down some largish bugs in Rakudo’s regular expression engine, to ease the pain further for others. I think we may also have a discussion about Rakudo’s List implementation and its features and next steps.

My thanks again to Salve, Jan, and Karl for organizing this hackathon — it has really enabled us to resolve some long-standing issues and make good plans for the next phases of development.

Pm


FLL: Matching LEGO wheels

This last fall my wife and I sponsored and coached a FIRST LEGO League robotics team that competed in the North Texas FLL Regional Tournament. We all had a great time and learned a lot. In the process we also discovered many helpful tips and ideas, but some of them weren’t available on the web or were difficult to locate. I’ve decided to collect and publish some of the ideas here so that (1) we’ll remember them for next year and (2) others can possibly benefit.

One of the things we discovered is the importance of matching wheels when building the robot. Intuitively one expects all LEGO wheels of the same type to be exactly the same size (i.e., have the same circumference). We found the reality to be quite different; two otherwise identical-looking wheels can in fact have substantially different circumferences in use. If the wheel circumferences are different, it’s harder to get the robot to reliably go straight.


Some thoughts on YAPC::EU 2011

YAPC::EU 2011 in Riga has just about finished, and it has been great seeing long-time friends again and making new ones. I’ve heard many people remark that they wish there could be more weeks like this one.

There are two items that stand out in my mind about this year’s conference:

1. Andrew Shitov and his crew are absolutely amazing at organizing and running a conference. This was the most flawlessly executed conference or event I think I’ve ever been to. Not only that, but Andrew and the other organizers made it look effortless, which to me is a mark of true greatness. I’m certain that in fact there was a lot of planning and effort behind it, but the entire team just looked relaxed and at ease throughout the event. I’d definitely encourage folks to attend any event that Andrew and this group organize.

2. Riga is a stunningly beautiful place. I definitely want to return here again some day, and I’m grateful that the organizers chose this location.

Pm


New regex engine for nqp and nom, now passing 7K spectests

Nom and nqp now have a new regular expression engine (currently known as “QRegex”) that I’ve implemented over the past week.

As progress continued on the new “nom” branch of Rakudo since my last posting, it was becoming increasingly evident that regular expression support would end up being the next major blocker. I think we were all expecting that nom would initially use the same regular expression engine that nqp (and nqp-rx) have traditionally used. However, as I started working on this, it began to look as though the effort and frustration involved would be almost as large as what would be needed to make a cleaner implementation up front, and would leave quite a messy result.

So, last week I started on designing and implementing a new engine. Today I’m happy to report that nom is now using the new QRegex engine for its pattern matching, and that making a new engine was undoubtedly a far better choice than trying to patch in the old one in an ugly manner.

So far only nom’s runtime is using the new regex engine; the nqp and rakudo parsers are still using the older (slow) one, so I don’t have a good estimate of the speed improvement yet. The new engine still needs protoregexes and a couple of other features before it can be used in the compilers, and I hope to complete that work in the next couple of days. Then we’ll have a good idea about the relative speed of the new engine.

I’m expecting QRegex to be substantially faster than the old one, for a variety of reasons. First, it should make far fewer method calls than the old version, and method calls in Parrot can definitely be slow. As an example I did some profiling of the old engine a couple of weeks ago, and the “!mark_fail” method accounted for something like 60% or more of the overall method calls needed to perform the parse.

QRegex does its backtracking and other core operations more directly, without any method calls for backtracking, so I expect this one change alone to reduce the number of method calls involved in parsing by almost a factor of three. Other common operations have likewise shed the method call overhead of the previous engine.

The new engine also uses a fixed-width encoding format internally, which means that we no longer pay a performance penalty for matching on Unicode UTF-8 strings. This will also enable us to eventually use the engine to do matching on bytes and graphemes as well as codepoints.

I also found quite a few places where I could drastically reduce the number of GCables being created. In some cases the old engine would end up creating multiple GCables for static constants; the new engine avoids this. A couple of new opcodes will enable QRegex to do substring comparisons without having to create new STRING GCables, which should also be a dramatic improvement.

I’ve already prototyped some code (not yet committed) that will integrate a parallel-NFA and longest-token-matching (LTM) into QRegex, so we’ll see even more speed improvement.

And did I mention the new engine is implemented in NQP instead of PIR? (Although it definitely has a lot of PIR influence in the code generation, simply by virtue of what it currently has to do to generate running code.)

Ultimately I’m expecting the improvements already put into QRegex to make it at least two to three times faster than its predecessor, and once the NFA and LTM improvements are in it ought to be even faster than that. And I’ve already noted new places ripe for optimizations… but I’m going to wait for some new profiles before doing too much there.

Another key feature of the new engine is that the core component is now an NQP role instead of a class. This means that it’s fairly trivial for any HLL to make use of the engine and have it produce match objects that are “native” to the HLL’s type system, instead of having to be wrapped. The wrapping of match objects in the old version of Rakudo was always a source of bugs and problems that we can now avoid. Credit goes to Jonathan Worthington for 6model, which enables QRegex to do this, and indeed the ability to implement the engine using roles was what ultimately convinced me to go this route.

While I’ve been working on regexes, Moritz Lenz, Will Coleda, Tadeusz Sośnierz, Solomon Foster, and others have continued to add features to enable nom to pass more of the spectest suite. As of this writing nom is at 244 test files and 7,047 tests… and that’s before we re-enable those tests that needed regex support. The addition of regexes to nom should unblock even more tests and features.

Some of the features added to nom since my previous post on July 2:
* Regexes
* Smart matching of lists, and other list/hash methods and functions
* Fixes to BEGIN handling and lexicals
* Implementation of nextsame, callsame, nextwith, callwith
* More introspection features
* Methods for object creation (.new, .bless, .BUILD, etc.)
* ‘is rw’ and return value type checking traits on routines
* Auto-generation of proto subs
* Junctions
* Backtraces

We’ve also done some detailed planning for releases that will transition Rakudo and Rakudo Star from the old compiler to the new one; I’ll be writing those plans up in another post in the next day or two.

Pm


More nom features and spectests, still 5x faster than master

Progress continues on the nom branch of Rakudo. As of this writing we’re up to 89 spectest files and over 1000 passing spectests, which is a good improvement from just five days ago.

We continue to see that nom performs much better than the previous version of Rakudo. Moritz Lenz added enough features to be able to run the mandelbrot fractal generator under nom, so we can compare speeds there. Under master a 201×201 set took 16 minutes 14 seconds to run; in nom it “naively” took 4.5 minutes, and with some further optimizations Moritz has it running in 3 minutes, for a factor-of-five improvement over the existing master branch. And there are still many more compiler-level optimizations that remain to be worked on.

In the past couple of days I added the metaoperators back into nom. Furthermore, the new implementation is far more correct — metaoperators such as &infix:<X> and &infix:<Z> can now handle multiple list arguments instead of just two as in master. We still haven’t added back the hyperoperators; I plan to do that in the next couple of days.
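A quick illustration of the multi-list behavior (a sketch; output is shown informally and may be formatted differently by nom):

say (1, 2 X 'a', 'b' X 10, 20).elems;   # 8 -- cross product over three lists
say (1, 2 Z 'a', 'b' Z 10, 20);         # ((1 a 10) (2 b 20)) -- zip over three lists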

Jonathan has been attending the Beijing Perl Workshop this week; his presentation slides are now available at http://jnthn.net/articles.shtml. Videos may be available soon. Even with his travels, Jonathan has continued to implement some of the needed lexical and role support in nom, so that we’re generally unblocked in making needed progress in the branch.

Carl Masak wrote a post introducing the Perl 6 type system; after reading an early draft of his post we discovered that several of the builtin types (Code, Attribute, Signature, Parameter) have been mistakenly implemented as subclasses of Cool. We’ve now fixed this in nom; we may or may not fix it in master.

Indeed, we’re already starting to phase out the master branch altogether. Yesterday I made a commit to master that effectively freezes it to always test against a specific revision of the spectests. This means we’re free to fudge and adapt the tests to the needs of the nom branch without concern for what it might do to testing in the master branch.

Speaking of tests, Moritz gave me some useful shortcut links for viewing different reports in our RT ticket queue. I’ve now set up a page on rakudo.org at http://rakudo.org/tickets/ where we can collect these report links and describe how the ticket queue works. One of the more useful links is http://rakudo.org/rt/testneeded ; this link shows a list of tickets that can be closed as soon as someone is able to confirm (or add) an appropriate test in the spectests. Writing tests is fairly easy; if you’re interested in helping with Perl 6 development, this can be a good place to start.
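If you’ve never written one, a typical “testneeded” addition is only a few lines in the relevant spectest file, something like this (the ticket number and description are placeholders):

use Test;
plan 1;
ok 1/3 + 1/6 == 1/2, 'RT #XXXXX: Rat arithmetic stays exact';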

Here’s a summary list of features added to the nom branch since my last posting (five days ago):

* Complex numbers and numeric operator fixes
* Complex numbers have two native nums instead of Num objects
* Rat literals
* List.pop, List.reverse
* Initial LoL (list of lists) implementation
* map and grep
* Many string methods and functions
* metaoperators: Rop, Zop, Xop, !op, op=, [op], [\op]
* infix:<===>, infix:<eqv>
* Hash and Array hold Mu values, scalars default to Mu constraint
* Fixes to Configure.pl and --gen-parrot=branch
* Proper handling of $_, $!, and $/
* Improved exception handling and reporting
* Hash and List slices, including autotrim on infinite indices

In the next few days I plan to have regexes working in nom, finish off the metaoperators, and improve string-to-number conversions (including radix conversions).

For people looking to learn some Perl 6, to help others with learning Perl 6, or to do a bit of both, Bruce Gray has started “Flutter” — a suite of “micro-demonstration screens” for Perl 6. Essentially each screen introduces or demonstrates a Perl 6 feature or concept. Flutter is still in the embryonic stage, so it could use both content and implementation improvements and I’m sure that patches and pull requests will be extremely welcome.

We can still use help with triaging spectests and other tasks; if you’re interested in hacking on code or otherwise helping out, email us or find us on IRC at freenode/#perl6. We can also use help with adding useful links and developer information to rakudo.org, if you’re inclined to do some of that.


Lots of Rakudo-nom progress, starts to run spectests

The nom branch of Rakudo continues to develop at a blistering pace. Yesterday nom finally had a working Test.pm, which meant we could start testing it against the spectest (“roast”) suite. As of this writing nom is passing 50 spectest files. By way of comparison, the master branch passes 551 spectest files, so we’re already about 9% of the way there. And I expect that number to grow — many of the spectests fail because nom is missing relatively minor features that can be easily restored. At this rate, I’m thinking it’s very possible that the next monthly release of Rakudo (July) will be based on the nom branch instead of the old master branch.

I’ve also worked further on nom’s list implementation, and it’s now faster than lists and iteration in master. In fact, for loops in the nom branch now run about 80% faster than they did in the master branch.

We continue to eliminate PIR from the code base in nom. For the core setting, we’re down to 143 instances of ‘pir::’ and 22 instances of ‘Q:PIR’. The rest have been replaced by generic ‘nqp::’ opcodes that can someday be targeted to other virtual machine backends. Currently we’ve defined about 83 nqp:: opcodes that are used in implementing the core setting. For efficiency reasons we might not ever be able to eliminate all PIR from the core setting, but we should be able to get it to be small enough that it can be walled-off into VM-specific code files.

To give an idea of how fast things are moving — here’s a summary of the features that have been added to nom in the past seven days:

* fail()
* lexically scoped returns
* for-style loops and map, 80% faster than master
* better infinite lazy list handling
* gather/take
* try statements
* package-scoped variables, subs, and methods
* whatever currying
* Test.pm
* lots of builtin operators and methods
* dynamic variables, PROCESS and GLOBAL namespaces
* IO objects, including $*IN, $*OUT, $*ERR
* literal values in signatures
* quantified method dispatch (.?method, .+method, .*method)
* basic roles, including Associative, Positional, and Callable
* basic support for natively-typed lexicals (e.g., ‘int’, ‘str’, ‘num’)
* argument interpolation
* list assignment
* new say and .gist semantics
* magical string increment and decrement
* sequence operator
* series operator
* preliminary BEGIN/CHECK/INIT/END phasers
* smart matching (~~)
* inlined assignment

So, you can see things are active. We’re also in need of testers and people who can help us triage spectests and figure out what is causing them to not run. If you’re interested in hacking on code or helping with the tests — email us or find us on IRC freenode/#perl6!
