Showing posts with label php.

Friday, January 24, 2014

Repost: Two Leg OAuth Authentication For Layar in PHP

This isn't my content, but the original by mobius went offline. Here's a copy from the Wayback Machine.

Well, it took me a few hours to get this to work, so I'm sharing my solution in case anyone ends up in the same place I was.
First, I should mention that I tried oauth-php (http://code.google.com/p/oauth-php/), which is probably the most complete OAuth library I found for PHP, but it was too complicated for what I wanted to do.
I also tried the PECL OAuth extension (http://pecl.php.net/package/oauth), but in version 1.0.0 I was unable to get OAuthProvider to perform a two-legged authentication. (I think there might be a bug having to do with passing callback functions as arrays that are part of the class.)
So eventually I found another OAuth library (http://oauth.googlecode.com/svn/code/php/OAuth.php) with which I could get a super stripped-down server to actually work (http://gist.github.com/360872).
Long story short, use this code in your layer to authenticate the Layar service:
require_once 'OAuth.php';
 
$key = 'xxxxxxxx';        // Set these both here and in the Layar layer configuration
$secret = 'xxxxxxxxx';
 
$consumer = new OAuthConsumer($key, $secret);
$signature = new OAuthSignatureMethod_HMAC_SHA1();
$request = new OAuthRequest( $_SERVER['REQUEST_METHOD'], 'http://' . $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'] );
 
if( !($valid = $signature->check_signature( $request, $consumer, null, $_REQUEST['oauth_signature'])) ) {
      exit;
}
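
For reference, the other half of the two-legged dance is the client signing its requests. Here's a minimal sketch of what that could look like with the same OAuth.php library - the endpoint URL and extra parameter are placeholders, not Layar specifics:

require_once 'OAuth.php';

// Same key/secret pair as configured on the server side
$consumer = new OAuthConsumer('xxxxxxxx', 'xxxxxxxxx');

// Two-legged: no token, just the consumer signing the request
$request = OAuthRequest::from_consumer_and_token(
    $consumer, null, 'GET', 'http://example.com/layer.php', array('lang' => 'en')
);
$request->sign_request(new OAuthSignatureMethod_HMAC_SHA1(), $consumer, null);

// The signed URL, including the generated oauth_* query parameters
print $request->to_url();
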
As mentioned by Rasmus in his really good tutorial on the PECL OAuth extension (http://toys.lerdorf.com/archives/55-Writing-an-OAuth-Provider-Service.html), a nice and reasonably secure way to generate the key/secret pair for OAuth could be the following snippet:

 
$fp = fopen('/dev/urandom','rb');
$entropy = fread($fp, 32);
fclose($fp);
 
// in case /dev/urandom is reusing entropy from its pool, let's add a bit more entropy
$entropy .= uniqid(mt_rand(), true);
$hash = sha1($entropy);  // sha1 gives us a 40-character hex string
 
// The first 30 characters should be plenty for the consumer_key
// We use the last 10 for the shared secret
 
print_r(array(substr($hash,0,30),substr($hash,30,10)));
I just wish someone at Layar had mentioned somewhere that this is "two-legged" authentication, for those of us who were not familiar with OAuth. It would have saved me a lot of time searching for the right answer :)

Monday, September 30, 2013

Dog of a week... and it's only monday?


Feeling a bit overwhelmed already
  1. I don't know why I have to argue about DI, but I find I do. It neatly gets you away from fat controllers and fat models, and onto the stuff that actually matters.
  2. I pulled together a quick ABN input on the weekend. I figured I've built the thing enough times to do it right, and other Australian web devs might have the same underlying issue.
  3. Many, many pull requests this week, contributing to Fat Free CRM, Chef recipes and even my much unloved fork of tipsy.
  4. All of Adelaide, most of Melbourne, Tasmania, Brisbane and Sydney have had a lot of attention from keepright.ipax.at - Adelaide has the cleanest map data of the lot, but Melbourne isn't far behind.
    My main focus: unclosed ways, redaction errors, sport tags without an associated physical tag - mostly things that won't show up in the rendering.
  5. Took the opportunity to work from home today, so spent my lunch break constructing a gravel path.

Monday, May 13, 2013

Consuming GeoRSS and querying OpenStreetMap

In Adelaide, there was an unexpected fire. It was a bushfire, and I'd just been mapping the area earlier that day - ironically, tracing some of the forests which were being burnt out at that very point in time.

I decided to go and see what was available for fire alerts around the nation.

It turns out the majority of fire authorities publish GeoRSS. I took the feeds, aggregated them, and added a transformation of the data to JSON.

Using Leaflet, I added a rendering of particular fires.

Using the OpenStreetMap Overpass API, I queried for "all areas in a 500m bounding box", and again using Leaflet, rendered those polygons onto a map.
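
For the curious, the Overpass side of that is just an HTTP call. Here's a rough PHP sketch of the kind of query I mean - the coordinates and bounding box size are made-up examples rather than the project's actual values:

// Hypothetical fire location, with roughly 500m expressed in degrees.
$lat = -34.93;
$lon = 138.60;
$d   = 0.0045;

// Overpass bounding boxes are (south, west, north, east).
$bbox  = ($lat - $d) . ',' . ($lon - $d) . ',' . ($lat + $d) . ',' . ($lon + $d);
$query = "[out:json];(node($bbox);way($bbox);relation($bbox););out body;";

$json     = file_get_contents('http://overpass-api.de/api/interpreter?data=' . urlencode($query));
$result   = json_decode($json, true);
$elements = $result['elements']; // nodes, ways and relations to post-process and hand to Leaflet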

Here's the Project:
https://github.com/CloCkWeRX/burning-down

Here's a few screenshots:

Image
Coverage, for most of Australia. WA doesn't provide openly licensed data, and South Australia's feed does not contain lat/long info.
The few you see in those states are from Sentinel, a satellite-based system for detecting 'hot spots' that may indicate a fire.

Image
Here, a number of residential buildings are marked as 'at risk' from a fire. In reality, these are probably safe, as fires in residential areas tend to be contained quickly.
On the other hand, you could certainly consider adding an OpenStreetBugs report if someone's house may have burned down!

Image
This is more interesting - there's a fire or other emergency right next to a national park. National parks are made of trees, which in my experience tend to catch a whole lot of fire!

Obviously this sort of incident is well in hand if the fire authorities are aware of it, but it is interesting to see that fuel sources are being picked up.

My next efforts here are going to be around encouraging edits near the area - if there isn't much detail on water sources (dams, swimming pools, rivers, lakes), it certainly makes sense to identify those once a fire has already happened.

I also made a bit of an effort to aggregate the feed output into SQLite - I might run up a basic server and offer this as a service at some later point, as aggregating this data seems useful.

The thing I like most about this: it's trivial to take any GeoRSS feed and query OpenStreetMap for nearby things. I've done fairly wide-ranging queries with a narrow bounding box so far, but those are easy enough to tweak.

Monday, August 06, 2012

Things I miss in ruby

Type hints.
In PHP I could use type hints to declare the interface I wanted injected into my method, or to ask for certain dependencies.

Sure, I can check types myself, but that seems painful compared to what PHP gave me out of the box.
You don't have to be a jerk about it, but a little structure can go a long way.
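
For anyone who hasn't leaned on them, this is the PHP feature I mean - the interface and class names are just illustrative:

interface Logger {
    public function log($message);
}

class ReportGenerator {
    private $logger;

    // The type hint documents and enforces the dependency at the boundary:
    // pass in anything that isn't a Logger and PHP complains immediately,
    // rather than three calls deeper.
    public function __construct(Logger $logger) {
        $this->logger = $logger;
    }
}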

Dependency injection.
For some reason, much of the Rails community read a post against DI frameworks and decided it was OK not to do DI at all - as though, once you duck type, a framework is just a philosophy.
The results speak for themselves. Clearly a level-seven Ruby developer wrote the RubyCAS Rails plugin, which has no tests, is a super singleton, and does whatever the hell it wants with your controller. Define two alternative logout URIs? Not on its watch!
I admit the design is from 2006 or earlier, when not so many people were enlightened, but I am still stuck with it.

And this from a library designed by people who develop for computer-literate universities.

Cut the cord, people: stop doing magic and just make a simple utility class or three.

Defensive programming.
There seems to be a jerk who said not to program defensively.
Thanks, that guy. The result is that execution of an app continues well past the point of dangerous input, all because of a general principle.
Think about it for a minute, with the observer pattern.
Some input happens, events trigger. Now there are fifteen event listeners, and they all freak out halfway through executing because of some failed expectation.
A simple exception to check the input often points directly at the problem, but in Ruby land you just let execution continue in general.
Most frustrating about this: those who discuss it on the web tend to agree that you should check input and bail out.
The point is not to fail silently. But Rails land doesn't care. It is going to write poor code, poorly, while screaming "look at my Ruby fu".
If Rails developers, by and large, left the power offered by Ruby well enough alone and coded for other humans, life would be far, far better.
KISS > DRY

Wednesday, June 06, 2012

On Magento

Lately I've been looking for quite a few small contracts - and it seems that Magento is coming up quite a lot more often than I would have thought.

I'm really excited by the new wave of open source commerce platforms - it's getting a lot closer to building a complete web of products. I've spent a lot of time on Freebase trying to describe content in GoodRelations; and with Google launching their Knowledge Graph, it seems quite likely that we'll see all of Google's product information out there as structured content/content with URIs.

While it's not quite possible to say this is a completely solved linked data issue, we're a lot closer than we were before.

I guess the next area we'll start to see is linked data coming off the front end and pushing into the backend systems.

This excites me - imagine a supply chain with end to end structured data, distributed queries to suppliers and automated acquisition of products.
Imagine accounting software integrated with your frontend, describing your customers in a structured way and being able to understand that a purchase made on "invoice 123" was for a consumable good.
If your accounting software stood up a queryable interface, you'd suddenly find yourself getting quite rich data back - feeding straight into business analysis tools.

Previously, this was the stuff of big data companies - but it seems on the cusp of becoming a reality within the next few years, as linked or services oriented data becomes much more prevalent.
In my last role, this took the creation of a data warehouse and several data marts, plus a dedicated business intelligence tool - and even then, that only got you some of the way, with red tape, politics, wrong or incomplete data modelling and infrastructure issues to fight through. If it's plain old HTTP (as most SPARQL endpoints are), 50% of those issues vanish immediately. If it's web-oriented data in graph form, another 20% of the issues disappear as well - leaving most of the problems at the "I've got the data, what's interesting from it" level.

Recently, a friend highlighted that this is already happening in some areas. Web Ninja's Magento integration is a great example. It's not linked data, but it's providing APIs and integration which certainly beat the heck out of previous approaches I've seen - those mostly focused around CSV-shaped data and legacy code written as extensions to impossible-to-maintain accounts receivable applications.

If you take a look at what kind of Magento integration is supported, or take the tour, you can see it's already tackling everything you can think of - MYOB (MYOB Retail Manager, MYOB accounting, MYOB Exo), Quickbooks, Access, Excel, Fishbowl Inventory, Attache Accounting, Tencia, Ostendo, Jiwa and more.


Even better from my perspective, Web Ninja is an Australian company. How neat is that? If it's happening here, it's likely happening elsewhere in other markets too.

With young, interesting companies taking this approach, I can see something like this going quite far - and from there, it's only a matter of time before the needs of integration push people towards linked-data-style approaches.

What else have you seen in the small business world which hints at integration taking place? Where else do you see the potential to gather real business intelligence out of two related internal systems?

Monday, May 28, 2012

Out of a job for the moment, but back into hacking

Ouch, rough week. My position in Adelaide was made redundant just as I started to get some traction. Doh.

This week's focus:

  • Go and dust off my Ruby and Rails skills, with the intent of building a linked data recipe mashup. It turns out that in all of the waiting, someone has gotten all of the data and done the linked data thing with it - the thing which stopped me dead the last time. Coughing up a workable UI for getfridged should be a snap, and it gives me a chance to update my knowledge to Rails 3.0.
    Status: We're already sending pull requests on github; I'll hopefully be back up to speed within 2-3 days.
  • After the Rails bit, make sure I can still hammer out Python. Something quick and dirty in Python/GTK, talking to Tracker over D-Bus, might be the go to display neat statistics or information about your local desktop PC.
  • Job applications: I've already put quite a few out there, with a few promising contacts so far.

    I've split it into two areas - happily looking at a wide variety of PHP-related work, just to get back to something I love and to continue eating. While that's happening, I'm also putting out feelers around the architecture side of things - I was getting pretty darned good at breaking down communication barriers and Getting All of the Enterprise Java Developers to Talk on Friendly Terms.
    While I would certainly need a crash course to go from "intermediate Java" to "highly productive enterprise Java developer", I can work well in a project role with the design of services, diving deeply into a domain and comprehending a data model.
I'm particularly proud of the work I did - building ValEx for 5-6 years from an idea in a back room of a small Adelaide business into a twice-acquired, market-leading platform.

I really ramped it up when my focus changed from ValEx only to RP Data transitioning to a SOA - I kicked down barriers, established communication between Dev, QA & business teams spread across Adelaide (2 teams), Brisbane (3-4 teams?), Sydney (2-5 teams/projects/etc?) & Offshore (2 teams?), and wore no less than 3 hats at all times (Solution Designer,  Developer, Project Manager, Data Modeller, Architect, Tinker, Tailor, but not yet Candlestick maker)

Some of the projects I did included:
  • Rewriting & integrating an existing RP Data product in a SOA fashion, integrating with 3 different legacy platforms.
  • Delivering parts of the RP Data consumer strategy - first in New Zealand back in 2011, and more recently a much-delayed linking of legacy platforms which I had been championing for some time
  • Improving address search within RP Data by an extra 8% or so, and advocating for a practical implementation of the next steps.
  • Providing guidance to LIXI on improvements to the LIXI Valuation Standards, given the updated release of the Property Pro Supporting Memorandum 2012
  • Providing detailed technical guidance on how LIXI can meaningfully adopt OWL or Linked Data related standards, continuing the discussion started by NICTA in 2008.
  • Establishing stable identifiers on a number of diverse data sets, as initial steps towards linked data in the enterprise; amid a number of legacy ETL processes and other quirky things.
  • Providing real world, usable QA controls (assess a transaction against external data sets, route and alert humans to abnormal information) based on linked data in a fairly complex workflow system.
  • Finally delivered an incremental business intelligence improvement, plus helped coax some of the operational reporting onto something more like an SDLC, mostly by sending people pictures of Yetis whenever an SQL fragment was emailed rather than put under version control. Pro tip: don't feed the SQL Yeti.
... and that's just the last twelve months of what I can remember. My biggest fault was not being able to explain it consistently and clearly enough - there barely seemed to be any time!

I miss the ValEx development work more, though. I fondly remember making it snow over Christmas (which promptly nuked the performance for all FF2.0-or-lower users, oops); hacking code all day but finding the really interesting problems to solve only at the Union Hotel after hours; putting in the hard yards to get two major and several other important banks integrated and working efficiently; and being able to walk into any section of the operational business and having friends whom I could help by making computers work for them, every day.

I guess that interaction paid off well - there was such a rapid outpouring of support from everyone in Adelaide, from when I found out on Thursday morning and word spread a bit, right up to the last few moments on Friday.
I could not seem to make it ten minutes without someone offering a heartfelt condolence, pushing a drink into my hand, or offering to help me get my foot in the door somewhere.


Sunday, November 13, 2011

Managing multiple job configurations for Jenkins

If you are in the same boat as I am, you find you have too many packages to look after with Jenkins.

The beauty of Jenkins is the simplicity of setting up a job with the web frontend - but once you get past a certain level of complexity, this is actually one of its bigger drawbacks.

Sure, we've got some templates, but how far can you really stretch it?

In my situation, I need to:

  1. Trawl SVN/other version control for all packages available - several hundred
  2. Add an entry to the CI suite only if the package has tests
  3. Adapt to packages which require E_ALL & ~E_STRICT, and run them happily under that
  4. Packages which require dependencies, but can't be installed, still need a mechanism to install said dependencies
  5. And some packages need to be invoked with the legacy AllTests.php
  6. Detect when a package has migrated to github
  7. ... and update an existing build/job with a new tool when required
I had tackled part 1 with PEAR's "packages-all" SVN link, which pointed to the trunk branches of all relevant code, and written some scripts for CruiseControl to find all directories with a /tests/ - but I find myself in need of something more.

So, my code is on github for now, and you can see the current CI system where those scripts have installed new jobs.

I'm quite sure that Pyrus and a local installation will deal with the dependencies, as they are all described with PEAR's package.xml format. Also, detecting when a package has shifted to github should be fairly easy to tackle, as there is much work underway to deal with migration.

The one area I need to explore is manipulating Jenkins jobs via XPath, to understand which parts of a job are already present and which need updating - basically number seven in the above list.
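
My current thinking is something along these lines - a rough sketch only, with a made-up Jenkins URL, job and repository, assuming the job's config.xml has already been fetched:

$doc = new DOMDocument();
$doc->load('config.xml'); // e.g. saved from http://ci.example.org/job/SomeJob/config.xml

$xpath = new DOMXPath($doc);

// What SCM is the job currently using?
foreach ($xpath->query('/project/scm') as $scm) {
    print 'SCM: ' . $scm->getAttribute('class') . "\n";
}

// Example of an in-place tweak: point the Subversion location somewhere new...
foreach ($xpath->query('/project/scm//remote') as $remote) {
    $remote->nodeValue = 'http://svn.example.org/pear/packages/SomePackage/trunk';
}

// ...then push the modified config.xml back to Jenkins.
$doc->save('config.xml');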

I'm curious who's done this sort of thing before, regardless of language, and whether there are any libraries which make it easier.


Saturday, March 19, 2011

XML_GRDDL, BestBuy & Good Relations

Digg used to publish RDFa, but it appears to have given it the boot.

So who is out there publishing useful RDFa? Best Buy, of course.

While they appear to have sold out of their example RDFa product, you can still get a heck of a lot of data out about the store itself.

The code:

require_once 'XML/GRDDL.php';
require_once 'Log.php';

$url = 'http://stores.bestbuy.com/577/fairless-hills-pa/products/open-box/frigidaire-30-freestanding-range/0012505540066/?uid=118';

$options = XML_GRDDL::getDefaultOptions();
$options['log'] = Log::singleton('console');
$grddl = XML_GRDDL::factory('xsl', $options);

$data = $grddl->fetch($url);

$data = $grddl->appendProfiles($data, array('http://ns.inria.fr/grddl/rdfa/'));

$stylesheets = $grddl->inspect($data, $url);

$rdfXml = array();
foreach ($stylesheets as $stylesheet) {
    $rdfXml[] = $grddl->transform($stylesheet, $data);
}

$result = array_reduce($rdfXml, array($grddl, 'merge'));

print $result;



The result? 80 or so triples come out, describing everything from the store's Facebook account to its geolocation, address, telephone number, email, opening hours and more.

Give it a go yourself:

$ pear install -f XML_GRDDL
$ cd /usr/share/php/doc/XML_GRDDL/docs
$ php bestbuy-rdfa.php | less

Sunday, March 21, 2010

Redland in PHP / Ubuntu

The PHP docs are a little lacking; so...

Install in Ubuntu: apt-get install php5-librdf


roll: great, the redland php bindings are also in darwin ports. just $port install redland-bindings +php5. time to integrate it in my project :)



Checking it's installed:


clockwerx@clockwerx-desktop:~$ php -i | grep rdf
redland librdf version => 1.0.9


How do you use it? Here's a port of example1.c

<?php
$world=librdf_new_world();
librdf_world_open($world);

$uri=librdf_new_uri($world, "http://librdf.org/bindings/bindings.rdf");
if(!$uri) {
die("Failed to create URI\n");
}

$storage=librdf_new_storage($world, "memory", "test", NULL);
if(!$storage) {
die("Failed to create new storage\n");
}

$model=librdf_new_model($world, $storage, NULL);
if(!$model) {
die("Failed to create model\n");
}

$parser_name = "";
$parser = librdf_new_parser($world, $parser_name, NULL, NULL);

if(!$parser) {
die("Failed to create new parser\n");
}

function librdf_uri_as_string($uri) {
return (string)$uri;
}

print "Parsing URI " . librdf_uri_as_string($uri) . "\n";
if(librdf_parser_parse_into_model($parser, $uri, $uri, $model)) {
die("Failed to parse RDF into model\n");
return(1);
}
librdf_free_parser($parser);


$statement2 = librdf_new_statement_from_nodes($world, librdf_new_node_from_uri_string($world, "http://www.dajobe.org/"),
librdf_new_node_from_uri_string($world, "http://purl.org/dc/elements/1.1/title"),
librdf_new_node_from_literal($world, "My Home Page", NULL, 0)
);
librdf_model_add_statement($model, $statement2);

/* Free what we just used to add to the model - now it should be stored */
librdf_free_statement($statement2);


/* Print out the model*/

print "Resulting model is:\n";
print librdf_model_to_string($model, $uri, "");

/* Construct the query predicate (arc) and object (target)
* and partial statement bits
*/
$subject=librdf_new_node_from_uri_string($world, "http://www.dajobe.org/");
$predicate=librdf_new_node_from_uri_string($world, "http://purl.org/dc/elements/1.1/title");
if(!$subject || !$predicate) {
die("Failed to create nodes for searching\n");
}
$partial_statement=librdf_new_statement($world);
librdf_statement_set_subject($partial_statement, $subject);
librdf_statement_set_predicate($partial_statement, $predicate);


/* QUERY TEST 1 - use find_statements to match */

print "Trying to find_statements\n";
$stream=librdf_model_find_statements($model, $partial_statement);
if(!$stream) {
die("librdf_model_find_statements returned NULL stream\n");
} else {
$count=0;
while(!librdf_stream_end($stream)) {
$statement=librdf_stream_get_object($stream);
if(!$statement) {
die("librdf_stream_next returned NULL\n");
break;
}

echo(" Matched statement: ");
print librdf_statement_to_string($statement);
print "\n";

librdf_stream_next($stream);
$count++;
}
librdf_free_stream($stream);
print "got " . $count . " matching statements\n";
}


/* QUERY TEST 2 - use get_targets to do match */
print "Trying to get targets\n";
$iterator=librdf_model_get_targets($model, $subject, $predicate);
if(!$iterator) {
die("librdf_model_get_targets failed to return iterator for searching\n");
}

$count=0;
while(!librdf_iterator_end($iterator)) {

$target=librdf_iterator_get_object($iterator);
if(!$target) {
die("librdf_iterator_get_object returned NULL\n");
}

print " Matched target: ";
print librdf_node_to_string($target);
print "\n";

$count++;
librdf_iterator_next($iterator);
}
librdf_free_iterator($iterator);
printf("got %d target nodes\n", $count);

librdf_free_statement($partial_statement);
/* the above does this since they are still attached */
/* librdf_free_node(predicate); */
/* librdf_free_node(object); */

librdf_free_model($model);

librdf_free_storage($storage);

librdf_free_uri($uri);

librdf_free_world($world);

Saturday, June 20, 2009

PEAR in June

June is here, and things are beginning to pick up again.

We've welcomed Rodrigo Sampaio Primo, probably better known for his efforts with TikiWiki and elsewhere; Peter Bittner joined us to feed back some of his Open Document improvements; and we've seen feature and bugfix releases of Services_Amazon_SQS, Net_LDAP2, Console_Commandline, XML_Serializer, PHP_UML, Payment_DTA, Net_UserAgent_Detect, Net_DNS, Services_Facebook, Testing_DocTest and Net_Nmap.

Christian Weiske has been working on getting Open Document back into shape, Greg Beaver is once again helping us move forward to elect a new PEAR group, as well as getting the next version of the PEAR installer ready for testing.

Slightly worryingly, we haven't heard much from Amir since the elections in Iran, and he hasn't been on IRC.

PHP 5.3 isn't far off, and I think it's fair to suggest that we've all got a subdued sense of excitement about it. That, and the consumption of a metric tonne of meat.




Friday, April 24, 2009

Handy hint for unit tests

We've got loads of unit tests. A run takes approximately 20 minutes.

This is because we've got a lot of database interaction, and the re-engineering effort required to go back and mock that all out is immense.

So, what's the best way to make sure you catch problems quickly?

In your AllTests.php, make it a policy to put new test suites at the top, rather than the bottom.


Basically:
    public static function suite() {
        $suite = new PHPUnit_Framework_TestSuite();

        $suite->addTestSuite('NewTest');
        $suite->addTestSuite('CoreTest');

        return $suite;
    }


This is the opposite of the usual "write your most important test cases first" thinking, but it helps you find the newest broken features.


Friday, February 20, 2009

Good habit: in_array()'s third param

I got lulled into a relaxed state of mind using in_array() to guard against input.

I had a validation method like:

$valid_types = array(0,1,2,3,4);

$type = 'string string string';

var_dump($type);
var_dump($valid_types);

var_dump(in_array($type, $valid_types));
var_dump(in_array($type, $valid_types, true));


Without executing it, what do you think happens?

I thought: bool(false), bool(false).

WRONG! in_array() does type conversion, so (int)"string string string" is 0; and yes, that's in our array.

So, to avoid surprises, always supply the strict parameter to in_array().

It's also a good thing to keep an eye on in code review.

Tuesday, February 10, 2009

PEAR bug day roundup - Feb 7th 2009

Here's a quick list of things done at/around the last bug triage day.

Accomplishments:
* Triaged the latest 50 bugs - doconnor
* Knocked off parse error related bugs - doconnor
* Updated unit tests to PHPUnit 3 for I18Nv2 - doconnor
* More unit tests fixed in PEAR 1.8 - dufuz
* Added Image_JpegXmpReader into CVS - doconnor
* Added Validate_HU into CVS, marked as unmaintained, removed 2x releases - doconnor
* Math_Finance got added to CVS - doconnor
* Validate got a new release - amir, davidc
* HTML_Page2 got a new release - doconnor
* Crypt_Rc4 bug fixes - kguest
* pearweb password bug - cweiske
* pearweb deployment and regression - cweiske / dufuz / doconnor


Sunday, January 25, 2009

Using Image_Graph neatly

Here are my two best tips around using Image_Graph for projects. They aren't necessarily right, but have worked fantastically for me.

Use it like Google Chart API (on demand)


Build a simple page which takes a number of arguments via GET variables, and serves up an image. You can then use simple commands to render whatever you like.

# Rendering code:
require_once 'Net/URL.php';
function make_graph_url($data) {

    $url = new Net_URL('graph.php');
    $url->querystring['data'] = $data;
    $url->querystring['type'] = 'pie';
    return $url->getURL(); // "graph.php?type=pie&data[Cats]=1&data[Fish]=2"
}

# HTML / Presentation bit
<img src="<?php print make_graph_url($data); ?>" alt="Graph of Cats and Fish" />

#Graph.php
require_once 'Image/Graph.php';

$graph = new Image_Graph();
// read in $_GET and construct your graph

$graph->done();


It's worth thinking about maintaining a pretty similar approach to Google's API, so that you can swap one for the other almost trivially.

Pre-rendering


Say you have a set of reports you must run. The amount of data is huge, so you really don't want to try to do things on the fly. You have to update the data periodically - i.e. once a week or month.

Steps here:
1. Denormalize in the database - precalculate answers and render them into tables. It will save you loads of time.
2. When you have the data you need, pre-render the graphs and save them to disk. Do it with an easy naming scheme (see the sketch below).

Now when someone hits your pages to look at information, you've got everything already there - it's just a matter of wiring it together.
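
As a rough illustration of the naming scheme idea - the fetch_report_data() and build_graph() helpers here are hypothetical stand-ins for your own data access and Image_Graph setup:

require_once 'Image/Graph.php';

$period = date('Y-\WW'); // e.g. 2009-W04 - one set of graphs per week

foreach (array('sales', 'signups', 'returns') as $report) {
    $data  = fetch_report_data($report, $period); // reads the denormalized tables
    $graph = build_graph($data);                  // wires up the Image_Graph plots

    // done() can write to a file instead of the output stream,
    // so the web pages only ever serve static images.
    $graph->done(array('filename' => "graphs/{$report}-{$period}.png"));
}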


These two things are pretty obvious and self-explanatory, but worth keeping in mind. The last thing you want to do is build a page which assembles data, then realize Image_Graph renders to a different stream (i.e. not inline), and resort to copy-and-paste coding.



Tuesday, January 20, 2009

PEAR Bug Triage Day Results (December 17-18th)

Here are the results of Bug Triage, Dec 17-18th, and the few days surrounding it.

Accomplishments


  • Triage the latest 50 bugs
  • Crypt_GPG - gauthierm fixed broken unit tests
  • Services_Amazon_SQS - gauthierm is making progress on mocking out all HTTP
  • Validate_BE unit tests fail - doconnor fixed
  • HTTP_Session - doconnor made it skip if it can't possibly pass due to the unit test environment
  • XML_Feed_Parser - doconnor excluded from unit test results (too much noise for too little benefit)
  • PEAR 1.8 - fixing up unit tests - dufuz
  • Text_Wiki_Creole saw a release
  • Services_ExchangeRates 0.6 - doconnor did a release; mocked out all unit tests, tweaked API
  • MDB2 / MDB2_Driver_mysql - a new beta was put out! Lots of bugs fixed since the last release
  • New pear server was tested, lots of little patches, not quite ready yet! - doconnor, dufuz, cweiske, farell


If you missed it, the next one will be Feb 7-8th.


Thursday, January 15, 2009

Bug Triage, 17th/18th Jan

Hey all, get your bug triage gloves on!

This weekend in #pear and #pear-bugs

Overall goals:
* Triage latest 50 bugs
* Break the new pearweb server as much as possible so we can have shiny new pearweb releases
* If you have a package which is still on package.xml 1.0, upgrade it
* New releases which would be handy (look at, talk to maintainers, etc)
* Services_ExchangeRates 0.6 - doconnor
* Numbers_Words
* Image_Graph - doconnor emailed on 27th
* DB_DataObject - troehr emailed on 27th
* Validate - troehr emailed on 27th
* Image_Transform - troehr emailed on 27th, waiting for feedback from Philippe Jausions (appx Jan 1)
* Image_Canvas - doconnor emailed on 27th
* HTML_Page2 - troehr emailed on 27th
* HTTP_WebDAV_Client - doconnor emailed on 27th
* HTTP_WebDAV_Server - doconnor emailed on 27th
* Net_SmartIRC - doconnor emailed on 27th
* SQL_Parser - doconnor emailed on 27th
* Spreadsheet_Excel_Writer - troehr emailed on 27th, doconnor suitably scared
* Mail_Mime - avb and lifeforms (walter) are on it
+ http://pear.php.net/bugs/bug.php?id=11238 - reasonably high impact - needs much refactoring, probably a Mail release too.
* Math_Finance - get it into cvs - doconnor
+ Usage examples would be neat
* Contact_Vcard_Build
* Contact_Vcard_Parse

I'll be around most of Saturday and Sunday during Australian daylight hours (GMT+0930); so if anyone wants to jump in and hassle people during the US/European daylight hours, feel free!

Monday, December 29, 2008

PEAR bug triage roundup - Dec 27th/28th

We had PEAR bug triage on the 27th/28th.

I'd expected this to be a quiet one, but CVS activity was actually pretty heavy!

We accomplished:
* XML_Feed_Parser tests got added (1500 unit tests)! - doconnor
* HTTP_Upload parse errors fixed - doconnor
* Net_SMPP parse errors fixed - doconnor
* Net_Whois bugfix release - doconnor
* Massive improvements to PEAR_PackageFileManager tests - dufuz
* Auth_Prefmanager tests now skip if not configured - doconnor
* HTML_Template_IT 1.3.0a1 released - doconnor
* Image_Color 1.0.3 released - doconnor
* MP3_Playlist - phpcs - doconnor
* Net_IPv6 got into the pear test suite - doconnor
* Started Services_Akismet2 - gauthierm
* Started the process for new releases of DB_DataObject, HTML_Page2, HTTP_Upload, HTTP_WebDAV_Client, HTTP_WebDAV_Server, Image_Canvas, Image_Graph, Image_Transform, MDB2, MDB2_Driver_mysql, Mail_Mime, Net_SmartIRC, SQL_Parser, Spreadsheet_Excel_Writer, Validate - doconnor, troehr

The most important one here would be PEAR_PackageFileManager improvements - this is part of getting PEAR 1.8 out of the door.

Coming in second was the addition of 1500 or so tests with XML_Feed_Parser - unfortunately, we went from 145 failures to over 1000. The benefit of this: You can really see where PHP / libxml have a few holes, so over time, more bugs will be filed and this will improve.

Unfortunately, overall, it felt like we just ended up with more work on our plates as we unravelled bug after bug - so we'll power on through at the next bug triage day!


Friday, December 19, 2008

WTF: Refactoring snippet of the day

<?php
class SortingClassNameHere {
    public function __construct($string) {
        $this->string = $string;
    }

    private static $c = null;

    public static function cmp($a, $b) {
        $r = strnatcasecmp($a[self::$c], $b[self::$c]);
        return ($r > 0 ? 1 : ($r < 0 ? -1 : 0));
    }

    public function process($data) {
        self::$c = $this->string;
        if (!usort($data, array("SortingClassNameHere", "cmp"))) {
            throw new Exception('Unable to sort results.');
        }

        return $data;
    }
}


Hints:
* SortingClassNameHere::cmp() is never called anywhere else in the code base apart from process()
* If you don't know why this is bad, I will shoot you.
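
For the record, one way it could look instead - my sketch, not from the original code base: let the instance carry the column and hand usort() an instance callback, so there's no static property being smuggled around and no hard-coded class name string.

class Sorter {
    private $column;

    public function __construct($column) {
        $this->column = $column;
    }

    public function cmp($a, $b) {
        // Compare two rows on the configured column
        return strnatcasecmp($a[$this->column], $b[$this->column]);
    }

    public function process(array $data) {
        usort($data, array($this, 'cmp'));
        return $data;
    }
}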


Wednesday, December 10, 2008

PEAR Bug Triage Day Results (December 6th)

PEAR's December Bug Triage Day was alright, if somewhat quiet. The activity was actually spread out over the preceding weeks rather than concentrated on the day itself.

Christian got a new version of Services_Blogging out earlier in the month, while Console_GetArgs and Numbers_Words were updated and gained increased unit test coverage - just about everyone under the sun chipped in and wrote translated unit tests for Numbers_Words: Igor, Christian, Lorenzo, Kouber, David and anyone else I missed.

Payment_DTA got a new owner in Martin Schutte, which saw a good few bug fixes applied.

I fixed up Text_Figlet, which had broken PHP 4 compatibility, and got out a release of it and Services_Yadis.

Finally, Validate got back in the unit test good books, with almost all unit test failures resolved.

The next one is calendared for December 28th, so we'll see how that goes :)




Saturday, November 29, 2008

Bug Triage Day - December 6th

Hello all!

It's almost Bug Triage Day again, which will be taking place on December 6th/7th

If you are a user of PEAR and have wanted to contribute to an open source project, here's a great opportunity.

We run a bug day every 3 weeks or so, on irc.efnet.org #pear & #pear-bugs, across two days. We basically try to improve the quality of incoming bug reports, write test cases, and make sure reported problems are reproducible.

We also try to boost overall code quality - unit tests, documentation and other improvements.

If you've got a package you use commonly, or bugs you filed some time ago that haven't been fixed, this is a good time to get involved.

Join us on Dec 6th/7th; or pop into irc.efnet.org #pear to say hello and ask questions; or respond here!




My main focus:
Text_Wiki is a heavily used package which could use a little love!

There are numerous feature requests and bugs open for it which could no doubt use some attention:
http://pear.php.net/bugs/search.php?cmd=display&package_name[]=Text_Wiki&status=Open

Other PEAR developers: if you've got a specific target in mind, chime on in!

Here's what happened at the last one.