I'm not very good at writing documentation, but I have some ideas about what documentation should be. I think documenting interfaces is more important than documenting implementations. I like documentation systems like Javadoc: the documentation lives right next to the code, boilerplate can be generated automatically, and the system encourages documenting the interface over the implementation.
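For instance, a minimal interface documented with Javadoc might look like this (the interface and method names here are made up for illustration) -- note that the comments describe the contract, not how any particular implementation works:

```java
/**
 * Looks up user records by id. Implementations may be backed by a
 * database, a cache, or an in-memory map; callers should not care.
 */
public interface UserLookup {
    /**
     * Returns the display name for the given user id.
     *
     * @param userId the id of the user to look up; must not be null
     * @return the user's display name, or null if no such user exists
     */
    String displayNameFor(String userId);
}
```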
For external documents, there's often the essential information, and then there's lots of verbiage. I know how to stick in the essential information. I don't know how to generate the verbiage.
Also, where I worked in the past, documents had to be in Microsoft Word format, and were emailed around. I didn't like using Microsoft Word. If I could get away with it, I'd write the documentation in a plain text file using emacs.
Nowadays, the documentation is on Atlassian Confluence, which I greatly prefer over Microsoft Word documents. The Confluence search is horrible, but, other than that, I think it's pretty good. There was an intermediary period where documentation was either in Microsoft Word or in Confluence, or in both, and in discussions on where it should be, I'd always vote for Confluence.
Wednesday, July 15, 2009
Monday, July 13, 2009
I first started using Linux in January 1992. I downloaded the boot disk and root disk of version 0.11 from tsx-11.mit.edu. My computer had 2 megabytes of RAM, a 40 megabyte disk, a 16MHz processor, and a 2400 baud modem, so it was super slow. The lack of RAM was the biggest issue; gcc would swap forever to build anything.
So, what I did was to build gcc on the university SunOS computer (which had replaced an Ultrix computer a year or two earlier) to cross-compile to 80386. That worked pretty well. I also tried to build gas to cross-assemble to object files, but that failed. But gas was fast enough on my computer, so, whenever I wanted to build anything, I'd transfer it to the university computer and cross-compile it, then download the .s files to my Linux computer, and assemble and link it. It beat waiting hours for gcc on my Linux computer. That's how I built nethack, which I played quite a bit back then.
Back then, I thought I'd be switching to some GNU operating system eventually. After all, Linux was 386-only at the time. But nowadays, Linux is very widely used. My main computer at work now runs Linux, though I also have one with Microsoft Windows, as the company email and meetings are on Microsoft Exchange. The production application servers are all Linux, though the databases are Solaris.
Friday, July 10, 2009
I first started using source control when I was a student. I started with rcs. I had also read about sccs, but rcs was free and it was available. I liked having a history of changes and the ability to get at old versions of files, though I didn't fully appreciate its value at the time. A while later, I started using cvs on projects worked on by 2-4 people. This was where CVSROOT was on the filesystem.
Then, I took a job that used Microsoft Visual SourceSafe. It had its advantages and disadvantages compared to cvs. I disliked the interface, though. From time to time, I tried to figure out how to use it from the command line, but never really got anywhere.
A while later, the source control got switched to cvs (client-server). I really liked that change. Mainly because I could then do away with having to work on Microsoft Windows at all.
Where I work now uses perforce, which I like. It's a modern source control system with a command-line interface, and there's a nice emacs package for it that I also use often.
I imagine other modern source control systems are pretty much like perforce, but the most I've ever done with them is to download source code from public subversion repositories. I've also played with GNU arch a little, but with just a local repository. It was like going back to rcs, in a sense.
Wednesday, July 8, 2009
One of the times when my motivation to work on code is lower than I'd like is at the beginning of a project, when nothing is there yet. This happens quite a bit with personal projects; at work, starting projects from scratch is very rare. One problem is that I have ideas about lots of things, but there is no framework for them to fit in. Once the framework is in place, the motivation to work on something becomes much higher, because when I have an idea, I can get straight to work on it and see how it works.
I generally get started by working bottom-up, making some components that I think I'll be using. Once I have some components, I'll work top-down, building out a skeleton framework. Once a sufficient framework is in place, it gets a lot more fun, where I can add stuff and test it out immediately.
The Spring Framework inversion of control container is really helpful throughout this process. I can make an initial implementation of a component by hard-coding its return values. After testing the framework around that component, I can swap the implementation for another test implementation that saves everything in memory. After that, I can swap it for a real implementation that stores stuff in a database. All that is also possible using hand-coded factory methods, but with the Spring Framework, it is available at every level -- each component can be injected with other components that can be swapped in or out in the configuration file, all without having to code up more factory methods.
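That staged swapping can be sketched like this (the interface and class names are hypothetical). Each stage implements the same interface, so the framework around the component never changes; with Spring, picking a stage is a one-line change in the bean configuration rather than a new factory method:

```java
import java.util.HashMap;
import java.util.Map;

// The component's interface: everything else depends only on this.
interface GreetingStore {
    void save(String user, String greeting);
    String load(String user);
}

// Stage 1: hard-coded return values, just enough to test the
// framework around the component.
class HardCodedGreetingStore implements GreetingStore {
    public void save(String user, String greeting) { /* ignored */ }
    public String load(String user) { return "hello"; }
}

// Stage 2: a test implementation that saves everything in memory.
class InMemoryGreetingStore implements GreetingStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String user, String greeting) { data.put(user, greeting); }
    public String load(String user) { return data.get(user); }
}

// Stage 3 would be a database-backed implementation. Swapping it in
// is a one-line edit to the Spring configuration file, e.g.
//   <bean id="greetingStore" class="example.JdbcGreetingStore"/>
// in place of
//   <bean id="greetingStore" class="example.InMemoryGreetingStore"/>
```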
Monday, July 6, 2009
The first bug tracking system I used had a Microsoft Windows-only client. I think it was called Track. Its database got corrupted too often. The next one I used was Bugzilla, which was a huge improvement. Plus, I didn't have to use Microsoft Windows in order to use it. The next one was Rational ClearQuest. It had a web interface, but it, rather pointlessly, I thought, used Java applets. It got replaced by Jira, another huge improvement, and is still being used.
When I get assigned a bug, it's sometimes clear from the report what code needs to be fixed. Most of the time, it's not.
Sometimes, logs are attached to the bug, and those are sometimes enough to determine what the fix should be. Sometimes there are no logs attached, or the attached logs aren't enough. If the bug was filed by QA, then I go to the QA system and look at the logs there. The logs on the QA systems usually go back a week or two, and bugs are usually assigned to me three or four days after they were filed. So I can usually go back and see more context around the logs that were attached, or guess at which logs from shortly before the bug was filed are applicable.
Most of the developers where I work will wipe out the logs when reproducing bugs, probably because they consider old logs clutter. But, it does mean losing some context, and sometimes when they come to me for help, the logs they've wiped out would have been helpful. I've never wiped out the logs on my systems, and have logs going back for years.
Finally, once all else fails, I'll try to reproduce the bug on my system.
Friday, July 3, 2009
The problem with restructuring a bunch of code to make adding a bunch of features much cleaner is that management then wants some of those features, which I added after restructuring the code, in a branch that was made before the restructuring. And of course, the restructuring can't go into that branch.
That's what happened to me recently. Version 1.x had been branched off, and I did lots of restructuring of the main branch for version 2.0, as well as adding a bunch of features. Version 1.x.y was frozen except for critical bug fixes, but version 1.x.z was given an extended schedule, and some features I put in 2.0 now need to be in 1.x.z, and the implementations of most of those features depend on the restructuring to be done cleanly. I think I'll be hacking in throwaway implementations for a lot of them in 1.x.z.
Wednesday, July 1, 2009
One thing I've wished for from time to time when using Java is tuples, and maybe some syntactic sugar on top of them for returning multiple values. This comes up whenever I have a method that needs to return one more thing than it already does.
One solution I've used in the past was to pass in an array, and put the value to be returned in the array. It's really ugly, and it's not my preferred way of doing things.
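The array trick looks something like this (the method and names are made up for illustration), which shows why I find it ugly -- the extra return value is smuggled out through a mutable parameter:

```java
// Returns the quotient normally, and smuggles the remainder out
// through the caller-supplied one-element array.
static int divide(int dividend, int divisor, int[] remainderOut) {
    remainderOut[0] = dividend % divisor;
    return dividend / divisor;
}
```

The caller has to allocate a throwaway array just to receive the second value: `int[] rem = new int[1]; int q = divide(17, 5, rem);`.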
What I do now is declare a new class to hold the multiple values.
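For the three-integer case discussed below, such a class might look like this (the class and method names are hypothetical):

```java
// A small immutable holder class for returning three ints at once.
final class ThreeNumbers {
    final int a, b, c;
    ThreeNumbers(int a, int b, int c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }
}

// The method returns all three values in one object.
static ThreeNumbers get3Numbers() {
    return new ThreeNumbers(1, 2, 3);
}
```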
It would be nicer to be able to declare something like this
Tuple<Integer,Integer,Integer> get3Numbers();
and then call it with
a, b, c = get3Numbers();
or
(a, b, c) = get3Numbers();
where a, b, and c are ints in this example.
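The closest approximation in plain Java is a hand-rolled generic tuple class (Tuple3 here is not a standard class), at the cost of boxing the ints and unpacking the fields by hand:

```java
// A hand-rolled generic 3-tuple; Java has no built-in equivalent.
final class Tuple3<A, B, C> {
    final A first;
    final B second;
    final C third;
    Tuple3(A first, B second, C third) {
        this.first = first;
        this.second = second;
        this.third = third;
    }
}

static Tuple3<Integer, Integer, Integer> get3Numbers() {
    return new Tuple3<>(1, 2, 3);
}

// The caller still unpacks field by field -- there is no
// (a, b, c) = get3Numbers() destructuring syntax.
```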