Windows Media Connect vs Windows Media Center Extender
Good, short article on differences!
SlimServer and PRISMIQ Player:
You can experiment with combining the PRISMIQ MP and the SlimServer from Slim Devices (the server software that drives the SliMP3 and SqueezeBox). After you install and configure it, you can browse your audio file collection hierarchically by Artist, Album, Genre, or File name, or enter a search string for Artist, Album, or Track title. You can play OGG files, view album art, and more, all while sitting in front of your TV, rooms away from your PC. And the audio interface looks great.
- You can download and install the SlimServer software, which is available free of charge from here --> http://www.slimdevices.com/su_downloads.html
- During the install process SlimServer will ask for the location of your music files.
- Once installed, go to the PRISMIQ Media Manager and create a new Internet Radio station, http://192.168.0.99:9000/stream.mp3, and a new web bookmark, http://192.168.0.99:9000 (192.168.0.99 being the IP address of the machine running SlimServer; substitute your own).
- You can use the web browser on your PC to open http://192.168.0.99:9000 and change the skin (yes it's fully skin-able too) to use the 'Handheld' skin. You could have done this from the PRISMIQ web browser.
- Next go to your TV with PRISMIQ MP running.
- First, launch the Internet radio station you created then open the bookmark you created.
- From there it is pretty straightforward: browse to the music you want to play and click the play button; after a second or two the music starts playing.
Volume Shadow Copy - is this an alternative or complement to a RAID-based file server?
What Is Volume Shadow Copy Service?
The Volume Shadow Copy Service provides the backup infrastructure for the Microsoft Windows XP and Microsoft Windows Server 2003 operating systems, as well as a mechanism for creating consistent point-in-time copies of data known as shadow copies.
Previous to the Volume Shadow Copy Service and its standard set of extensible application programming interfaces (APIs), there was no standard way to produce “clean” (uncorrupted) snapshots of a volume. Snapshots often contained corruptions due to “torn writes” that required the use of utilities such as Chkdsk.exe to repair. Torn writes occur when an unplanned event (such as a power failure) prevents the system from completely writing a block of data to disk. The Volume Shadow Copy Service APIs prevent torn writes by enabling applications to flush partially committed data from memory.
The Volume Shadow Copy Service has native support for creating consistent shadow copies across multiple volumes, regardless of the snapshot technology or application. The Volume Shadow Copy Service can produce consistent shadow copies by coordinating with business applications, file-system services, backup applications, fast recovery solutions, and storage hardware. Several features in the Windows Server 2003 operating systems use the Volume Shadow Copy Service, including Shadow Copies for Shared Folders and Backup.
Setting up and Using the Volume Shadow Copy Service:
"VSS (Volume Shadow Copy Service) is a new feature in Windows Server 2003 that allows you to revert a networked file to a previous version (or just look at it in an older state, if you wish)."
10-Minute Solution: Using the Volume Shadow Copy
Decrease your volume of help-desk calls with Windows Server 2003.
by Nelson Ruest and Danielle Ruest
For This Solution: Windows Server 2003, Windows XP Professional.
One of the most exciting features of Windows Server 2003 is the Volume Shadow Copy (VSC) service. What's most impressive is that it is fast and easy to implement, and it will have an immediate, positive impact on Help Desk workload because of the way the shadow copy service works with shared folders. The VSC service automatically takes a "snapshot" of the files located in any shared folder where the service has been enabled. These snapshots include an image of the folder's contents at a given point in time.
Single Source Information: An Agile Practice
Locality Of Reference Documentation
The LoRD Principle -- Locality breeds Maintainability
A few years back, I coined a principle that I call LoRD: LocalityOfReferenceDocumentation. It tries to address the problem of keeping code and documentation consistent with one another and up-to-date. The principle may be stated formally as if it were a Newtonian Law of sorts:
The likelihood of keeping all or part of a software artifact consistent with any corresponding text that describes it is inversely proportional to the square of the cognitive distance between them.
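Written out as a formula (the symbols here are my own shorthand, not the author's):

```latex
P(\text{consistent}) \;\propto\; \frac{1}{d_{\text{cognitive}}^{2}}
```

where $P(\text{consistent})$ is the likelihood of artifact and description staying consistent and $d_{\text{cognitive}}$ is the cognitive distance between them.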
A less verbose, less pompous description would be simply: Out of sight; out of mind!
Therefore, it is desirable for us to try and minimize the cognitive distance between artifacts and their descriptions! This is not without its presumptions and caveats (which are described later).
Ready Reference for Lubricant and Fuel Performance
Driveline Lubricants - Automotive Gear Lubricants
API Gear Oil Designations (See API Publication 1560 for full description)
Service Designations in Current Use
- GL-1 Denotes lubricants intended for manual transmissions operating under such mild conditions that straight petroleum or refined petroleum oil may be used satisfactorily. Oxidation and rust inhibitors, defoamers and pour depressants may be added to improve the characteristics of these lubricants. Friction modifiers and extreme pressure additives shall not be used.
- GL-4 Denotes lubricants intended for axles with spiral bevel gears operating under moderate to severe conditions of speed and load or axles with hypoid gears operating under moderate speeds and loads. These oils may be used in selected manual transmissions and transaxle applications where API MT-1 lubricants are unsuitable.
- GL-5 Denotes lubricants intended for gears, particularly hypoid gears, in axles operating under various combinations of high-speed shock loads and low-speed, high-torque conditions. Lubricants qualified under MIL-L-2105D satisfy the requirements of the API GL-5 specification, although the API designation does not require military approval.
- MT-1 Denotes lubricants intended for nonsynchronized manual transmissions used in buses and heavy-duty trucks. Lubricants meeting the requirements of API MT-1 provide protection against the combination of thermal degradation, component wear and oil seal deterioration. API MT-1 does not address the performance requirements of synchronized transmissions and transaxles in passenger car and heavy-duty applications.
Service Designations not in Current Use
- GL-2 Denotes lubricants intended for automotive worm gear axles operating under such conditions of load, temperature and sliding velocity that lubricants satisfying API GL-1 service will not suffice. Products suited for this type of service contain antiwear additives or film-strength improvers specifically designed to protect worm gears.
- GL-3 Denotes lubricants intended for manual transmissions operating under moderate to severe conditions and spiral-bevel axles operating under mild to moderate conditions of speed and load. These service conditions require lubricants having good load-carrying capacities, exceeding those satisfying API GL-1 service but below the requirements of lubricants satisfying API GL-4 service. Lubricants designated for API GL-3 service are not intended for axles with hypoid gears.
- GL-6 Denotes lubricants intended for gears designed with very high pinion offsets. Such designs typically require protection from gear scoring in excess of that provided by API GL-5 gear oils. A shift to more modest pinion offsets and the obsolescence of original API GL-6 test equipment and procedures have greatly reduced the commercial use of these lubricants.
BenzWorld : Viewing a thread: "RE: Whats the importance of Non-Hypoid gear oil?
I had saved this information, this guy is so thorough that it needs no more explanations:
From: email@example.com (Andy Dingley)
Subject: Re: Hypoid vs. Non-Hypoid
Date: Tue, 09 Jan 1996 17:45:49 GMT
Mark <74551.2327@CompuServe.COM> wrote:
>Can someone explain the difference between Hypoid and Non-Hypoid oil?
'Hypoid' is not really a question of oil, so much as a question of gearcutting. Old (1920s) rear axles used straight bevel gears to form the crownwheel and pinion. These had two disadvantages: the pinion shaft meets the crownwheel on its central axis, and the straight-cut gears are noisy. By using a more complex 'hypoid' gear tooth shape (if you look at a pinion, the teeth appear twisted) these problems can be addressed. The more gradual engagement of the teeth along their length reduces noise, and by careful design of the geometry the pinion can be made to mesh _below_ the axis of the crownwheel. As the centre height of the crownwheel is fixed by the wheel height, this allows the propshaft to be lowered relative to the car body, giving a clearer floorpan and a lower centre of gravity for better cornering. Hypoid bevels are now universal in this application.
Because of the sliding contact that hypoid gears make, their hydrodynamic contact pressure is higher. To be suitable for use with hypoid gears, a lubricant must be capable of resisting high pressures. Oils with 'EP' ratings (Extreme Pressure) such as EP90 are required. Some brands describe themselves as 'hypoid' instead, a term which is synonymous with EP. GL-5 is a formal API standard for this type of oil (comparable to MIL-L-2105B/C/D).
Pennzoil Frequently Asked Questions: "2. What happens if API GL-5 gear oil is used in an API GL-4 gear oil application?
API GL-4 and API GL-5 products typically use the same extreme pressure (EP) additive system, with the API GL-5 having about twice the concentration of an API GL-4. In service, these additives become active under extreme load and temperature, when the protective oil film can be squeezed away. EP additives work by forming wear-resistant compounds with the metal of the gear tooth surface. As the gears mesh, these compounds shield the gear teeth from direct metal-to-metal contact that would cause wear and damage to the gears. If too little of the active additive is present, proper protection would be compromised. Too much of this additive could cause excessive chemical corrosion of the gear surface. If an API GL-5 gear oil is used in an application where API GL-4 gear oil is called for, chemical corrosion of 'yellow metal' components may occur, such as bronze synchronizers, brass bushings, etc. This may lead to shifting difficulties or shortened equipment life. "
Windows XP Explorer Search: Using the "A word or phrase in the file" search criterion may not work
When you search for files that contain text by using the A word or phrase in the file search criterion, the search results may not contain files that contain the text that you specified. For example, .log, .dll, .js, .asp, .xml, .xsl, .hta, .css, .wsh, .cpp, .c, or .h files, or files with no file name extension, may not appear in the search results even if the files contain the text that you specified. This problem may occur even if you specified the file name or type in the All or part of the file name box. "
<... snip ...>
"To configure Windows XP to search all files no matter what the file type, obtain the latest service pack for Windows XP and then turn on the Index file types with unknown extensions option.
If you use this method, Windows XP searches all file types for the text that you specify. This can affect the performance of the search functionality. To do this:
1. Click Start, and then click Search (or point to Search, and then click For Files or Folders).
2. Click Change preferences, and then click With Indexing Service (for faster local searches).
3. Click Change Indexing Service Settings (Advanced). Note that you do not have to turn on the Index service.
4. On the toolbar, click Show/Hide Console Tree.
5. In the left pane, right-click Indexing Service on Local Machine, and then click Properties.
6. On the Generation tab, click to select the Index files with unknown extensions check box, and then click OK.
7. Close the Indexing Service console."
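If you prefer to set this directly in the registry, my understanding is that the console checkbox corresponds to a DWORD value under the ContentIndex key; verify on your own system before merging a file like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ContentIndex]
"FilterFilesWithUnknownExtensions"=dword:00000001
```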
How can I disable the Windows Explorer search assistant in Windows XP?
John Savill's FAQ for Windows:
A. Windows Explorer contains a new search assistant that Microsoft designed to provide a friendlier, easier search experience. The new search tool lets you search for specific file types (e.g., multimedia files, document files), computers, and more. If you prefer the old search tool, perform the following steps:
- Start a registry editor (e.g., regedit.exe).
- Navigate to the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CabinetState subkey.
- From the Edit menu, select New, String Value.
- Enter a name of Use Search Asst.
- Double-click the new value, type the appropriate data in the 'Value data' field to use the old search tool, and click OK.
- Close the registry editor.
Cheap servers for small businesses - Computer Sweden - part of IDG.se
2005-02-22 08:30 Fujitsu Siemens' new servers cost just over SEK 4,000 and are aimed at small businesses that today use a PC as a server. According to Fujitsu Siemens' calculations, at least 150,000 smaller Swedish companies either use no server at all or make do with an ordinary PC as a server. Among the advantages cited for using a server instead of a PC is that servers are built from the ground up to run around the clock without needing restarts.
Disk mirroring and RAID
The new Primergy Econel 50 and 200 servers support RAID disk mirroring and four hard disks. Fujitsu Siemens also promises that the servers will be unusually quiet, since smaller companies usually lack dedicated server rooms.
The Econel 50 can be had for SEK 4,400 excluding VAT if a Celeron processor is enough, while its big brother, the Econel 200, which supports dual Xeon processors, starts at SEK 8,000.
Fujitsu Siemens stores on tape and disk - Computer Sweden - part of IDG.se: "Fibrecat N20i is the name of Fujitsu Siemens' new storage system for smaller companies. The product is intended for use in storage networks and consists of four SATA disks of 250 GB each."
TNK-BootBlock.co.uk: "Adobe Reader SpeedUp
Adobe Reader SpeedUp is a simple application that was created to help make the loading time of Adobe's Acrobat/Reader software bearable for everyday use. AR SpeedUp only needs to be run once (a process taking only a few seconds) and then your Reader will be transformed forever. There are also some tweaking options available. 'w00t!', as the young kids say. "
Java ID3 Tag Library This library reads song information, such as song title, artist, and album, from an MP3 file. It supports ID3v1, ID3v1.1, Lyrics3v1, Lyrics3v2, ID3v2.2, ID3v2.3, and ID3v2.4 tags. MP3 frame headers can also be read. There is a FilenameTag, an ID3v2.4 tag that is intelligently derived from the file name. The library contains tag synchronization utilities, multiple save options, and easy tag conversion methods.
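The library's own API isn't reproduced here, but the ID3v1 format it reads is simple enough to sketch by hand: a 128-byte block at the end of the file, beginning with "TAG", holding fixed-width fields. The class below is an illustrative stand-in of my own, not part of the Java ID3 Tag Library:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Minimal ID3v1 parser sketch: title/artist/album live at fixed offsets
// in the final 128 bytes of an MP3 file. Names here are illustrative.
public class Id3v1Sketch {
    // ID3v1 fields are fixed-width, padded with NULs or spaces
    static String field(byte[] tag, int off, int len) {
        String s = new String(Arrays.copyOfRange(tag, off, off + len),
                              StandardCharsets.ISO_8859_1);
        int end = s.indexOf('\0');
        return (end >= 0 ? s.substring(0, end) : s).trim();
    }

    /** Returns {title, artist, album} or null if no ID3v1 tag is present. */
    public static String[] parse(byte[] last128) {
        if (last128.length != 128 || !"TAG".equals(field(last128, 0, 3)))
            return null;
        return new String[] {
            field(last128, 3, 30),   // title
            field(last128, 33, 30),  // artist
            field(last128, 63, 30),  // album
        };
    }
}
```

Reading the tag is then just a matter of seeking to (file length - 128) and handing those bytes to parse().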
DateBk5 FAQ's: "Also, the new PIM Databases have some new fields in them (such as the Location field in the calendar database). This information is *NOT* mirrored into the classic PIM databases as there was no provision for storing this information. HOWEVER, PocketMirror has always had the capability of appending the location to the description text, so you may just want to use PocketMirror (www.chapura.com) to synchronize the Datebook database if that particular item is important to you."
F Lock Key Info: "F Lock Key
The F Lock key on Microsoft keyboards is a relatively new facility. Introduced with the Office Keyboard and used on later keyboards, it essentially allows keys to perform more than one operation. The F Lock key, depending on its state, selects either a function key's "normal" operation or a new "enhanced" operation. The initial state of the F Lock key is "off", and in this state the function keys use their "enhanced" operation.
The F Lock key is a hardware switch in the keyboard. Its state cannot be controlled programmatically. Its default condition is "off". As a result, whenever the keyboard is reset, or loses power, the F Lock key will always be in an "off" state.
For some, this is not a desirable default; some people want "normal" function key operation.
While there is no way to control the F Lock key state programmatically, Windows 2000, Windows XP and Windows Server 2003 provide a method whereby keys can be remapped by using the Scan Code Mapper. The Scan Code Mapper can be used to change the functions of the function keys. Unfortunately, this functionality is not available for earlier Windows versions.
Applying that knowledge, the two zip files below contain registry files that will alter the key mappings after Windows 2000, Windows XP or Windows Server 2003 starts up. When the F Lock key is in the "off" position, the function keys will have their "normal" function, and when the F Lock key is in the "on" position, the function keys will have their "enhanced" function. Note that these files will only alter the functionality of your function keys (for those who have the Natural Multimedia Keyboard, Multimedia Keyboard, Wireless Optical Desktop Keyboard, Wireless Optical Desktop Pro Keyboard, Basic Wireless Optical Desktop Keyboard or Wireless Desktop Elite Keyboard, the function(s) of the PrtScn/Insert and Pause/ScrLk keys will not be changed).
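As an illustration of what such a mapping file contains (this sample is my own, not one of the zip files above): the Scan Code Mapper reads a binary "Scancode Map" value under the Keyboard Layout key. The layout is two zeroed header DWORDs, a DWORD entry count (mappings plus the terminator), one DWORD per remapping (the scan code to send, then the key being remapped, both as little-endian words), and a null-DWORD terminator. This example remaps Caps Lock (0x3A) to act as Left Ctrl (0x1D):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00
```

A reboot (or log off/on) is required before the mapping takes effect, since the value is read at startup.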
In Relation To...: EJB3 (by firstname.lastname@example.org)
Yesterday, Linda DeMichiel announced the changes coming in EJB 3.0. There was a lot to digest in her presentation, and I think it will take a while for people to figure out the full implications of the new spec. So far, most attention has focused upon the redesign of entity beans, but that is most certainly not all that is new! The expert group has embraced annotations aggressively, finally eliminating deployment descriptor XML hell. Taking a leaf from Avalon, Pico, Spring, HiveMind, etc., EJB will use dependency injection as an alternative to JNDI lookups. Session beans will be POJOs with a business interface, and home objects have been eliminated. Along with various other changes, this means that EJB 3.0 will be a much more appropriate solution for web-based applications with servlets and business logic colocated in the same process (which is by far the most sane deployment topology for most - but not all - applications), without losing the ability to handle more complex distributed physical architectures.
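Dependency injection of the sort the expert group is adopting can be illustrated without any EJB classes at all. The annotation and mini-container below are stand-ins of my own (not the javax.ejb API, which was not final at the time of writing):

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Toy container illustrating field injection in place of JNDI lookups.
// The @Resource annotation here is our own stand-in, not the EJB3 one.
public class DiSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Resource { String name(); }

    // What the container knows about; a real one would manage lifecycles.
    public static final Map<String, Object> registry = new HashMap<>();

    /** Instead of bean code calling ctx.lookup(name), the container
     *  pushes each dependency into the @Resource-annotated field. */
    public static void inject(Object bean) {
        for (Field f : bean.getClass().getDeclaredFields()) {
            Resource r = f.getAnnotation(Resource.class);
            if (r != null) {
                try {
                    f.setAccessible(true);
                    f.set(bean, registry.get(r.name()));
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static class AuctionBean {
        @Resource(name = "greeting") public String greeting;
    }
}
```

The point is the inversion: the bean declares what it needs and the container supplies it, instead of the bean reaching out through lookup code.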
Why Data Models Shouldn't Drive Object Models (And Vice Versa)
Bringing data professionals and application developers together.
by Scott W. Ambler, Copyright 2003
This essay is taken from Chapter 3 of Agile Database Techniques.
A common problem that I run into again and again is the idea that a data model should drive the development of your objects. This idea comes in two flavors: your physical data schema should drive the development of your objects and that a conceptual/logical data model should be (almost) completely developed up front before you begin to design your objects. Both of these views are inappropriate for non-agile projects and clearly wrong for agile projects. Let’s explore this issue in more depth.
Why do people want to base their object models on existing data schemas? First, there is very likely a desire to reuse the existing thinking that went behind the current schema. I’m a firm believer in reusing things, but I prefer to reuse the right things. There is an impedance mismatch between the object and relational paradigms, and this mismatch leads object and data practitioners to different designs. You also saw in Object Orientation 101 that object developers apply different design techniques and concepts than the techniques and concepts described in Data Modeling 101 that data modelers apply. Second, the database owner seeks to maintain or even enhance their political standing within your organization by forcing you to base your application on their existing design. Third, the people asking you to take this approach may not understand the implications of this decision, or that there are better ways to proceed.
Object Relational Mapping Tools
There is a large and growing number of projects that map relational databases into object models. This paper provides a basic review of each of the projects of which the author is aware, in the hope that it will prove useful to anyone who is trying to select such a tool.
The key contribution of this work is to discuss the core technical problems that need to be solved, and then to classify the tools according to how they attempt to solve these problems. It is therefore focused on the general design of each tool rather than on specific performance or reliability issues.
Due to the large number of ORM tools most of these reviews are based on reading product documentation rather than direct experience using each tool. The reader should also note that the author is developing the www.SimpleORM.org tool and so has a natural bias towards its approach. However, the classification of approaches and general list of the tools should be helpful to anyone trying to select an Object Relational Mapping tool.
Implementing The Persistence Layer
6. Implementing The Persistence Layer
There are several issues that you need to be aware of with persistence layers if you wish to be successful.
These issues are:
· Buying versus building the persistence layer
· Concurrency, objects, and row locking
· Development language issues
· A potential development schedule
6.1 Buy Versus Build
Although this white paper is aimed at people who are building a persistence layer, the fact is that building and maintaining a persistence layer is a complex task. My advice is that you shouldn't start developing a persistence layer if you can't see it through. This includes the maintenance and support of the persistence layer once it is in place.
If you decide that you either can't or don't want to build a persistence layer, then you should consider purchasing one. In my third book, Process Patterns (Ambler, 1998b), I go into detail about the concept of a feasibility study, which looks at the economic, technical, and operational feasibility of something. The basic idea is that your persistence layer should pay for itself, should be possible to build or buy, and should be possible to support and maintain over time (as indicated previously).
A feasibility study should look at the economic, technical, and operational feasibility of building/buying a persistence layer.
The good news is that there are a lot of good persistence products available on the market, and I have provided links to some of them at http://www.ambysoft.com/persistenceLayer.html to provide an initial basis for your search. Also, I have started, at least at a high level, a list of requirements for you in this document for your persistence layer. The first thing that you need to do is flesh them out and then prioritize them for your specific situation.
Hibernate in Action: "Hibernate in Action: Practical Object/Relational Mapping
Christian Bauer and Gavin King
April 2004, Softbound, 400 pages
Our price: $44.95
You can order this book from your bookstore
by using the ISBN and title listed above.
Hibernate in Action is both an introduction to the theoretical aspects of automated object/relational mapping and a practical guide to the use of Hibernate. The extensive example code implements an online auction application.
The book is divided into two parts. Part I discusses object persistence, the object/relational mismatch problem and emphasizes the importance of Plain Old Java Objects. It introduces Hibernate and explains the basic theoretical foundations of object/relational mapping.
Part II is dedicated to demonstrating more advanced ORM concepts and techniques, with practical examples using Hibernate. The impact of ORM upon application architecture and development processes is explored along with techniques for achieving high performance. Hibernate's developer toolset is demonstrated and best practices are recommended.
Writing your own mapping layer
It can be very tempting to write your own object-relational mapping. In fact, there are books and articles advocating this. The bottom line, however, is that unless you have a very simple mapping, it is a bad idea to write your own mapping layer.
I have talked with plenty of developers who tried writing a mapping layer. The result, although anecdotal, is universal. The mapping code, in each case, grew to be 30 to 40 percent of the code needed for the entire application. There are two problems that resulted from this. The first is that this is a lot of effort towards writing code that is not addressing the business problem that prompted the application development in the first place. The second is that, given the development models that show how code defects rise with total code, a significant number of additional defects appeared which, again, are not directly related to the business problem being addressed by the application development.
So, if you have a relational database and you want to use C++ or Java, by all means use an object-relational product. Writing a mapping layer is much harder than you might expect. Some of the reasons this is hard can be seen in the considerations for mapping:
For object-to-table mapping, see mapping objects to tables.
For table-to-object mapping, see mapping tables to objects.
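A taste of why hand-written mapping layers balloon: even the easy object-to-row direction needs per-class boilerplate before any of the hard parts (identity, caching, lazy reads, transactions, inheritance) appear. A minimal sketch, with invented class names:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hand-rolled object-to-SQL sketch: one row-builder per class.
// Every persistent class needs code like toRow(); multiply by the
// schema size and the 30-to-40-percent figure above becomes plausible.
public class MappingSketch {
    public static class Customer {
        public long id; public String name;
        public Customer(long id, String name) { this.id = id; this.name = name; }
        // Per-class boilerplate: field-to-column mapping, in column order
        public Map<String, Object> toRow() {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("id", id);
            row.put("name", name);
            return row;
        }
    }

    /** Builds a parameterized INSERT for any column map. */
    public static String insertSql(String table, Map<String, Object> row) {
        String cols = String.join(", ", row.keySet());
        String marks = String.join(", ", Collections.nCopies(row.size(), "?"));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }
}
```

And this covers only inserts; updates, deletes, reads, joins, and change detection each add their own layer of the same kind of code.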
Javalobby Forums - IBM VM (WebSphere)
Re: IBM VM (WebSphere) Posted: Nov 7, 2003 8:51 AM
IBM has had one for some time, but they do not make it easy to get. I found my copy in an installation of the latest version of WebSphere MQ, which you should be able to download for free. Look in the install directory and you should find a JDK 1.4.x. This is the version info my copy shows; there may be a newer version now.
java version '1.4.0'
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0)
Classic VM (build 1.4.0, J2RE 1.4.0 IBM Windows 32 build cn140-20020902 (JIT enabled: jitc))
It obviously depends on your scenario, but for me I found the JRE to be comparable to the performance of SUN's 1.4x VM, and marginally slower than IBM's 1.3 VM.
Home - Enterprise Integration Patterns
Patterns and Best Practices for Enterprise Integration
This site is dedicated to making the design and implementation of integration solutions easier. Most solutions and approaches described here are valid for most integration tools and standards, such as IBM WebSphere MQ, TIBCO, Vitria, SeeBeyond, JMS, Microsoft Messaging (MSMQ), Web Services, etc.
This site is maintained by Gregor Hohpe. I lead the Enterprise Integration practice for ThoughtWorks, an application development and systems integration firm. I hope you find this material useful. Feel free to contact me with suggestions or feedback. Also, my company happens to be pretty good at helping clients build integration solutions, so don't hesitate to contact me if you are interested in our services.
POPFile - Automatic Email Classification
Excerpt from Software Development Newsletter:
Bayesian analysis is a relatively new technique for fighting spam. It also happens to be useful for classifying mail in general. To get Bayesian analysis to work, you must first train the system by classifying some messages on your own. After a while, the system gathers enough data about how you classify messages that it can essentially work untouched. Of course, the more you train it, the better it gets. This is why PopFile provides a simple, Web-based user interface that allows you to classify and reclassify messages whenever you want.
Like most software based on Bayesian analysis, PopFile isn't perfect -- it does misclassify messages from time to time. But it gets better over time as you train it on more messages. For the first week, during the "training period," I found myself using the interface a few times a day to reclassify messages. Now I go into the interface maybe once a week. It's a minor hassle, but the time I save overall is well worth it. If you, too, are tired of being managed by your inbox, PopFile just might be the answer.
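The Bayesian scoring POPFile performs can be sketched in a few lines. This toy classifier (my own simplification, not POPFile's code) counts word frequencies per bucket and picks the bucket with the highest smoothed log-probability:

```java
import java.util.HashMap;
import java.util.Map;

// Toy naive Bayes text classifier in the spirit of POPFile's buckets.
public class BayesSketch {
    final Map<String, Map<String, Integer>> counts = new HashMap<>();
    final Map<String, Integer> totals = new HashMap<>();

    /** Record one training message under the given bucket. */
    public void train(String bucket, String text) {
        Map<String, Integer> c =
            counts.computeIfAbsent(bucket, k -> new HashMap<>());
        for (String w : text.toLowerCase().split("\\W+")) {
            if (w.isEmpty()) continue;
            c.merge(w, 1, Integer::sum);
            totals.merge(bucket, 1, Integer::sum);
        }
    }

    /** Pick the bucket maximizing the sum of log P(word|bucket), add-one smoothed. */
    public String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String bucket : counts.keySet()) {
            double score = 0;
            for (String w : text.toLowerCase().split("\\W+")) {
                if (w.isEmpty()) continue;
                int c = counts.get(bucket).getOrDefault(w, 0);
                score += Math.log((c + 1.0) / (totals.get(bucket) + 1.0));
            }
            if (score > bestScore) { bestScore = score; best = bucket; }
        }
        return best;
    }
}
```

Reclassifying a message in the UI amounts to calling train() again under the corrected bucket, which is why accuracy improves with use.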
Software Modeling on Whiteboards
by Scott W. Ambler, Copyright 2003
Whiteboards are my favorite modeling tool, and I stand by my claim that they are the modeling tool with the largest installed base worldwide. In fact, throughout this web site you will see many whiteboard sketches, which is fine for an online article or even a book, but are they acceptable for real-world development? My experience is yes. Here's how I make it work for me.
BT Exact > White papers
The future looks ever more exciting each year. Technology development is still accelerating and an increasing number of new fields are being created and exploding new ideas onto the market.
The future is hard to predict, but here at BTexact we have always believed that inventing the future is the best way to create it. One thing is certain about the distant future: the world will be a very different place. One tool we produce to help alleviate uncertainty about the future is our BTexact technology timeline.
The timeline is produced mainly to give BT researchers and managers a view of what the operating environment is likely to contain at any future date, so that our products and services can be better targeted to the needs of the customer. But we have also found that many people outside the company find it useful too, so we always try to make it as free of technical jargon as possible. What must be remembered by anyone preparing for the future is that technology change isn't very important in itself. What matters is what this change enables or destroys.
The intention of the timeline is to illustrate the potential lying ahead for beneficial technologies. Not all will be successful in the marketplace. Some won't ever be implemented at all, but as the rest come on stream, our lives will improve in many ways. We will have more variety of entertainment, better health, greater wealth, and probably better social well-being. We will have more time saving devices and ultra-smart computers will do most of our admin, but the future world will offer so much more opportunity to be productively and socially busy that we will have even less free time than today! If we think of this as living life to the full rather than in terms of stress, then the future looks good.
We hope you enjoy reading our timeline as much as we enjoyed producing it.
View the full white paper (PDF)
MDA code generator framework XCoder is now open source
(originally posted By: Constantin Szallies on October 24, 2003 @ 04:54 PM)
XCoder is an extensible model transformation and code generation framework. The framework is itself modelled with UML and generated using the standard UML to Java model transformation included in the distribution.
Currently supported input meta models: UML via XMI
Currently supported output meta models: Java, C# and C++
The distribution also includes a standard transformation from the UML to an EJB meta model.
The source is available open source under http://sourceforge.net/projects/xcoder, whitepapers are available under http://www.liantis.com/Downloads/index.html
XML and Java technologies: Data binding with Castor
A look at XML data binding for Java using the open source Castor project
XML data binding for Java is a powerful alternative to XML document models for applications concerned mainly with the data content of documents. In this article, enterprise Java expert Dennis Sosnoski introduces data binding and discusses what makes it so appealing. He then shows readers how to handle increasingly complex documents using the open source Castor framework for Java data binding. If your application cares more about XML as data than as documents, you'll want to find out about this easy and efficient way of handling XML and Java technologies.
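The difference between a document model and data binding is easy to see in code: with a DOM you walk nodes, while with binding you end up holding a typed object. The unmarshaller below is hand-rolled for illustration and does not use Castor's actual API:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Hand-rolled "data binding" sketch: XML in, typed object out.
// A binding framework like Castor derives this mapping for you
// instead of you writing element-by-element extraction code.
public class BindingSketch {
    public static class Person {
        public String name;
        public int age;
    }

    public static Person unmarshal(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Person p = new Person();
            p.name = doc.getElementsByTagName("name").item(0).getTextContent();
            p.age = Integer.parseInt(
                    doc.getElementsByTagName("age").item(0).getTextContent());
            return p;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Once the data is in a Person, the rest of the application never touches XML APIs at all, which is the appeal the article describes.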
XML and Java technologies: Data binding, Part 1: Code generation approaches -- JAXB and more
Generating data classes from DTDs or schemas
Enterprise Java expert Dennis Sosnoski looks at several XML data binding approaches using code generation from W3C XML Schema or DTD grammars for XML documents. He starts out with the long-awaited JAXB standard now nearing release through the Java Community Process (JCP), then summarizes some of the other frameworks that are currently available. Finally, he discusses how and when you can best apply code generation from a grammar in your applications.
XML and Java technologies: Data binding, Part 2: Performance
After kicking the tires in Part 1, take data binding frameworks out for a test drive
Enterprise Java expert Dennis Sosnoski checks out the speed and memory usage of several frameworks for XML data binding in Java. These include all the code generation approaches discussed in Part 1, the Castor mapped binding approach discussed in an earlier article, and a surprise new entry in the race. If you're working with XML in your Java applications you'll want to learn how these data binding approaches stack up!
XML and Java technologies: Data binding, Part 3: JiBX architecture
Tests in Part 2 showed JiBX delivers great performance -- here's how!
Enterprise Java technology expert Dennis Sosnoski gives a guided tour of his JiBX framework for XML data binding in Java applications. After introducing the current frameworks in Part 1 and comparing performance in Part 2, he now delves into the details of the JiBX design that led to both great performance and extreme flexibility for mapping between XML and Java objects. How does JiBX do it? The keys are in the internal structure...
XML and Java technologies: Data binding, Part 4: JiBX usage
Part 3 described the JiBX internal structure -- now find out how you actually use JiBX for flexible binding of Java objects to XML
JiBX lead developer Dennis Sosnoski shows you how to work with his new framework for XML data binding in Java applications. With the binding definitions used by JiBX, you control virtually all aspects of marshalling and unmarshalling, including handling structural differences between your XML documents.
Creative Science Systems Schema2Java Compiler
Schema2Java™ Compiler compiles XML Schemas into Java classes, which developers can then use in applications that process XML documents. Schema2Java™ Compiler enables XML processing with the convenience and ease of Java objects. It not only makes XML processing simple and easy, it can increase the productivity of Web Services development by as much as 80%. Why? Because XML document processing is an integral aspect of Web Services.
Schema2Java™ Compiler enables fast, error-free, and efficient development of Web Services applications.
Martin Fowler Bliki: EnterpriseArchitecture
Just recently I've picked up a couple of bad reviews on Amazon for P of EAA because there is nothing in the book about enterprise architecture. Of course there's a good reason for that - the book is about enterprise application architecture, that is how to design enterprise applications. Enterprise architecture is a different topic, how to organize multiple applications in an enterprise into a coherent whole.
As it turns out, I can get pretty cynical about enterprise architecture. This cynicism comes from what seems to be the common life-cycle of enterprise architecture initiatives. Usually they begin in a blaze of glory and attention as the IT group launches a major initiative that will bring synergy, reuse, and all the other benefits that can come from breaking down the stovepipes of application islands (and other suitable analogies). Two or three years later, not much has been done and the enterprise architecture group isn't getting their phone calls returned. A year or two after that, the initiative quietly dies, but soon enough another one starts and the boom-and-bust cycle begins again.
So why does this cycle happen with such regularity? I think that most people involved in these initiatives would say the reason they fail is primarily due to politics - but what they often miss is that those political forces are inevitable. To succeed in these things means first recognizing the strength of those political forces.
The problem for central architecture groups is that they are driven by IT management, but the applications they are looking to organize are driven by business needs. If an application team is told to do work that doesn't benefit their application directly, but makes it easier to fit in the architecture, there's a natural reluctance to do it. Furthermore they have the ace card - the business sponsor. If the business sponsor is told the application will ship four months late in order to conform to the enterprise architectural plans, then they are motivated to back up the application team when they say no (spelled "we'll get around to it later"). Since the application is directly connected to providing business value, and the central architectural team isn't, the application team wins. These wins cause the enterprise architecture initiative to bust.
To avoid this the enterprise architecture initiative has to recognize and submit to the political realities.
Understand what the business value of any enterprise architectural initiative is.
Make sure that any work is supported by incremental short term gains in business value.
Minimize costs to the applications
A good way to think about this is that these initiatives should be less about building an overarching plan for applications, and more about coming up with techniques to integrate applications in whatever way they are put together. (After all, ApplicationBoundaries are primarily social constructs and they aren't likely to conform to anyone's forward plans.) This integration architecture should work with the minimum impact to application teams, so that teams can provide small pieces of functionality as the business value justifies it. I think you also need to focus on approaches that minimize coupling between applications, even if such approaches are less efficient than a more tightly coupled approach might be.
These reasons tend to lead me toward a messaging approach to integration. While it has its faults, it's something that can be applied with minimal impact to existing applications.
By the way, enterprise application architecture can have a big impact upon enterprise integration. Applications that are nicely layered, particularly with a good PresentationDomainSeparation, are much easier to stitch together because you can more easily expose the application's functionality through services. This isn't a cost to the application, because good layering makes the application easier to maintain as well. However too few application developers understand how to do PresentationDomainSeparation. One of the best things an integration group can do is to support education and training to help them to do this (an approach that's best supported if you act like Architectus Oryzus rather than Architectus Reloadus). So in that sense my book has a lot to do with enterprise architecture.
XML Beans: relaxing the JDK 1.4 requirement
> -----Original Message-----
> From: Maurice_E_Sherman@Keane.Com
> Sent: Wednesday, October 08, 2003 7:42 AM
> To: email@example.com
> Subject: relaxing the JDK 1.4 requirement
> XMLBeans seems ideal for my project, but I'm constrained to
> using a 1.3.1 JDK in deployment, but not in development.
> Anyone have any advice on the feasibility of using XMLBeans
> in a JDK 1.3.1 deployment environment.
The goal of the Scarab project is to build an Artifact tracking system that has the following features:
- A full feature set similar to that found in other Artifact tracking systems: data entry, queries, reports, notifications to interested parties, collaborative accumulation of comments, dependency tracking
- In addition to the standard features, Scarab has fully customizable and unlimited numbers of Modules (your various projects), Artifact types (Defect, Enhancement, Requirement, etc), Attributes (Operating System, Status, Priority, etc), Attribute options (P1, P2, P3) which can all be defined on a per Module basis so that each of your modules is configured for your specific tracking requirements.
- Built using Java Servlet technology for speed, scalability, maintainability, and ease of installation.
- Import/Export ability via XML allowing for easy migration from other systems (e.g. Bugzilla).
- Modular code design that allows manageable modifications of existing and new features over time.
- Fully customizable through a set of administrative pages.
- Easily modified UI look and feel.
- Can be integrated into larger systems by re-implementing key interfaces.
OmniFormat image conversion freeware
OmniFormat is a free document conversion utility which allows dynamic conversion and image manipulation of over 75 file formats including HTML, DOC, XLS, WPD, PDF, JPG, GIF, TIF, PNG, PCX, PPT, PS, TXT, Photo CD, FAX and MPEG. For a full list of supported formats please see our FAQ page.
XMLBeans: The easiest way to use XML in Java
XMLBeans is a breakthrough technology from BEA that makes it incredibly easy for developers to access and manipulate XML data and documents in Java. For the first time, developers can gain a familiar and convenient Java object-based view of their XML data without losing access to the richness of the original, native XML structure and schema.
XMLBeans is based on an efficient XML token stream that provides easy navigation of XML data using cursors. This cursor interface is available for any XML document. If you have an XML Schema description of your document, XMLBeans will also provide Java class 'views' of the data. These Java classes enable easy read/write access to XML information and enforce XML Schema constraints. Because these Java views are based on the preserved, underlying XML representation, XMLBeans always maintains full fidelity of the original XML, and no information is ever lost. So instead of having to choose between full access to XML data through time-consuming traditional APIs like SAX and DOM or convenient but incomplete binding schemes, XMLBeans provides the best of both worlds.
Important Note: XMLBeans requires J2SE 1.4
XMLBeans - Overview
XMLBeans is an XML-Java binding tool that uses XML Schema as a basis for generating Java classes to be used to easily access XML instance data. It was designed to provide both easy access to XML information via convenient Java classes as well as complete access to the underlying XML, combining the best of low-level, full access APIs like SAX and DOM with the convenience of Java binding.
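XMLBeans' cursor interface is its own API, but the general idea of walking an XML token stream with a cursor can be illustrated with the JDK's StAX pull parser (a different, standard-library API used here purely as an analogy): the reader acts as a cursor that advances token by token, and the application inspects whatever it needs at each position, with nothing in the document out of reach.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Cursor-style navigation over an XML token stream, sketched with StAX
// (not XMLBeans' own XmlCursor API): advance token by token and act on the
// token types of interest -- here, simply counting element starts.
public class CursorSketch {
    public static int countElements(String xml) {
        try {
            XMLStreamReader cur = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int elements = 0;
            while (cur.hasNext()) {
                // next() moves the cursor and reports the token type
                if (cur.next() == XMLStreamConstants.START_ELEMENT) elements++;
            }
            return elements;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(countElements("<library><book/><book/></library>"));
    }
}
```

XMLBeans layers schema-typed Java views on top of exactly this kind of preserved token stream, which is how it can offer typed access without losing the underlying XML.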
Wired 8.04: Why the future doesn't need us.
Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species.
By Bill Joy
AARIA Autonomous Agents at Rock Island Arsenal (dead link?)
AARIA (Autonomous Agents at Rock Island Arsenal), is an ARPA-sponsored project designing an autonomous agent based factory scheduler at the Rock Island Arsenal. The project team is headed by Intelligent Automation, Inc. (Rockville, MD) and includes the University of Cincinnati, Industrial Technology Institute, and Flavors Technology, Inc. The agents, programmed in objective-C and running on a network of Pentium based computers under PDO (Portable Distributed Objects), will actively represent each step on the ladder of manufacturing a part: going from the customer, through the sales representative, engineers, manufacturing processes, and finally to the raw materials.
AN INTRODUCTION TO MARKOV CHAIN MONTE CARLO METHODS AND THEIR ACTUARIAL APPLICATIONS
This paper introduces the readers of the Proceedings to an important class of computer-based simulation techniques known as Markov chain Monte Carlo (MCMC) methods. General properties characterizing these methods will be discussed, but the main emphasis will be placed on one MCMC method known as the Gibbs sampler. The Gibbs sampler permits one to simulate realizations from complicated stochastic models in high dimensions by making use of the model’s associated full conditional distributions, which will generally have a much simpler and more manageable form. In its most extreme version, the Gibbs sampler reduces the analysis of a complicated multivariate stochastic model to the consideration of that model’s associated univariate full conditional distributions.
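A minimal sketch of the Gibbs sampler idea, on a toy case not taken from the paper: a bivariate standard normal with correlation rho. Its full conditionals are univariate normals, X | Y=y ~ N(rho·y, 1−rho²) and symmetrically for Y | X=x, so each sweep just draws each coordinate in turn from a one-dimensional distribution.

```java
import java.util.Random;

// Gibbs sampling for a bivariate standard normal with correlation rho:
// alternately draw each coordinate from its univariate full conditional.
// This is exactly the "reduce a multivariate model to its univariate
// full conditionals" idea described in the abstract, on the simplest
// possible example.
public class GibbsSketch {
    public static double[] sampleMeans(double rho, int n, long seed) {
        Random rng = new Random(seed);
        double x = 0.0, y = 0.0, sumX = 0.0, sumY = 0.0;
        double condSd = Math.sqrt(1.0 - rho * rho); // sd of each full conditional
        for (int i = 0; i < n; i++) {
            x = rho * y + condSd * rng.nextGaussian(); // draw from X | Y=y
            y = rho * x + condSd * rng.nextGaussian(); // draw from Y | X=x
            sumX += x;
            sumY += y;
        }
        return new double[] { sumX / n, sumY / n };
    }

    public static void main(String[] args) {
        double[] m = sampleMeans(0.8, 100000, 42L);
        System.out.printf("mean x = %.3f, mean y = %.3f%n", m[0], m[1]);
    }
}
```

For long runs the sample means should approach the true means (zero here); in real actuarial applications the same scheme is run over the full conditionals of a far more complicated model.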
MULTI-AGENT MARKET MODELING OF FOREIGN EXCHANGE RATES
A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The macroeconomic price dynamic is a consequence of the cumulated excess demand/supply created on this micro level. The behavior analysis of a small number of agents is well understood through game theory. In the case of a large number of agents one may use the limiting assumption that an individual agent does not have an influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach.
The contribution of the mathematical theory of neural networks to the market price formation is mostly seen on the econometric side: neural networks allow the fitting of high dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
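The "price formed by cumulated excess demand" mechanism can be made concrete with a toy simulation. This is an illustration of the general idea only, not the paper's model: each agent buys one unit if the price is below its private valuation and sells one unit otherwise, and a market maker then moves the price in proportion to the net excess demand. The valuations, step size, and update rule are all invented for the example.

```java
// Toy price formation from cumulated excess demand (hypothetical model):
// micro-level buy/sell decisions aggregate into a macro-level price update.
public class ExcessDemandSketch {
    public static double clearingPrice(double[] valuations, double start,
                                       double step, int rounds) {
        double price = start;
        for (int r = 0; r < rounds; r++) {
            int excess = 0;
            for (double v : valuations) {
                excess += (v > price) ? 1 : -1; // each agent's decision
            }
            price += step * excess; // cumulated decisions drive the price
        }
        return price;
    }

    public static void main(String[] args) {
        double[] vals = {1.0, 2.0, 3.0, 4.0, 5.0};
        System.out.println(clearingPrice(vals, 0.0, 0.01, 2000));
    }
}
```

The price settles near the median valuation, where buyers and sellers balance; the neural-network view in the abstract replaces these hard threshold decisions with neurons.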
JGroups - The JGroups Project
JGroups is a toolkit for reliable multicast communication.
(Note that this doesn't necessarily mean IP Multicast; JGroups can also use transports such as TCP.)
It can be used to create groups of processes whose members can send messages to each other. The main features include
- Group creation and deletion. Group members can be spread across LANs or WANs
- Joining and leaving of groups
- Membership detection and notification about joined/left/crashed members
- Detection and removal of crashed members
- Sending and receiving of member-to-group messages (point-to-multipoint)
- Sending and receiving of member-to-member messages (point-to-point)
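The membership-and-notification idea behind these features can be sketched in a few lines. To be clear, this is a toy, single-process illustration with invented classes; the real JGroups toolkit provides these semantics reliably across a network, with failure detection, which is the hard part.

```java
import java.util.ArrayList;
import java.util.List;

// A toy, in-process sketch of the group idea (hypothetical API, not
// JGroups): members join and leave a group, everyone is notified of
// membership changes, and messages can be sent point-to-multipoint.
public class GroupSketch {
    public interface Member {
        void viewChanged(List<String> members); // membership notification
        void receive(String from, String msg);  // message delivery
    }

    private final List<String> names = new ArrayList<>();
    private final List<Member> members = new ArrayList<>();

    public void join(String name, Member m) {
        names.add(name);
        members.add(m);
        notifyView();
    }

    public void leave(String name) {
        int i = names.indexOf(name);
        if (i >= 0) {
            names.remove(i);
            members.remove(i);
            notifyView();
        }
    }

    public void multicast(String from, String msg) { // member-to-group
        for (Member m : members) m.receive(from, msg);
    }

    public List<String> view() {
        return new ArrayList<>(names);
    }

    private void notifyView() {
        for (Member m : members) m.viewChanged(new ArrayList<>(names));
    }
}
```

In JGroups the equivalent operations go through a channel connected to a named group, and crashed members are detected and removed automatically rather than by an explicit `leave`.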
RAIDb: Redundant Array of Inexpensive Databases
Abstract: Clusters of workstations are becoming more and more popular for powering data server applications such as large-scale Web sites or e-Commerce applications. There has been much research on scaling the front tiers (web servers and application servers) using clusters, but databases usually remain on large dedicated SMP machines. In this paper, we address database performance scalability and high availability using clusters of commodity hardware. Our approach consists of studying different replication and partitioning strategies to achieve various degrees of performance and fault tolerance.
We propose the concept of Redundant Array of Inexpensive Databases (RAIDb). RAIDb is to databases what RAID is to disks. RAIDb aims at providing better performance and fault tolerance than a single database, at low cost, by combining multiple database instances into an array of databases. Like RAID, we define different RAIDb levels that provide various cost/performance/fault tolerance tradeoffs. RAIDb-0 features full partitioning, RAIDb-1 offers full replication, and RAIDb-2 introduces an intermediate solution called partial replication, in which the user can define the degree of replication of each database table.
We present a Java implementation of RAIDb called Clustered JDBC or C-JDBC. C-JDBC achieves both database performance scalability and high availability at the middleware level without changing existing applications. We show, using the TPC-W benchmark, that RAIDb-2 can offer better performance scalability (up to 25%) than traditional approaches by allowing fine-grain control on replication. Distributing and restricting the replication of frequently written tables to a small set of backends reduces I/O usage and improves CPU utilization of each cluster node.
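The RAIDb-1 (full replication) routing idea can be sketched with an in-memory toy. This is hypothetical illustration code, not C-JDBC: every write goes to all backends, and each read is load-balanced to a single backend, which works because every backend holds a full replica.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy sketch of RAIDb-1 request routing (hypothetical; backends are plain
// maps standing in for database instances): replicate writes everywhere,
// round-robin the reads.
public class Raidb1Sketch {
    private final List<Map<String, String>> backends = new ArrayList<>();
    private int next = 0; // round-robin read balancer

    public Raidb1Sketch(int n) {
        for (int i = 0; i < n; i++) backends.add(new HashMap<>());
    }

    public void write(String key, String value) {
        // full replication: every backend applies the write
        for (Map<String, String> b : backends) b.put(key, value);
    }

    public String read(String key) {
        // any single backend can answer, so spread the read load
        Map<String, String> b = backends.get(next);
        next = (next + 1) % backends.size();
        return b.get(key);
    }

    public static void main(String[] args) {
        Raidb1Sketch db = new Raidb1Sketch(3);
        db.write("user:1", "alice");
        System.out.println(db.read("user:1"));
    }
}
```

RAIDb-2's partial replication refines this by sending each write only to the backends that replicate the affected table, which is where the abstract's I/O and CPU savings come from.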
DISTRIBUTED COMPUTING ARCHITECTURE/E-BUSINESS ADVISORY SERVICE
By James Odell et al.
Agents: Technology and Usage (Part 1)
Agents: Complex Systems (Part 2)
Vizional > Solutions > Adaptive Demand Management: "The Vizional Adaptive Demand Management application performs three major functions: synchronize independent demand, propagate dependent demand upstream, and enable complete order fulfillment. The application's three modules allow companies to intelligently and rapidly analyze, manage and shape customer demand and orders across the entire supply network. The Vizional Adaptive Demand Management application utilizes powerful, adaptive agents distributed across the supply network that monitor real-time demand information, detect variations in demand, coordinate relevant interactions and resolutions, and then dynamically adjust forecasts to the actual consumption signals. This results in an adaptive supply network that continuously synchronizes demand with supply, intelligently realigning plans and processes based on the specific business environment."
The TAC Supply Chain Management Game
Supply chain management is concerned with planning and coordinating the activities of organizations across the supply chain, from raw material procurement to finished goods delivery. In today’s global economy, effective supply chain management is vital to the competitiveness of manufacturing enterprises as it directly impacts their ability to meet changing market demands in a timely and cost-effective manner. With annual worldwide supply chain transactions in the trillions of dollars, the potential impact of performance improvements is tremendous. While today’s supply chains are essentially static, relying on long-term relationships among key trading partners, more flexible and dynamic practices offer the prospect of better matches between suppliers and customers as market conditions change. Adoption of such practices has however proven elusive, due to the complexity of many supply chain relationships and the difficulty in effectively supporting more dynamic trading practices. TAC SCM was designed to capture many of the challenges involved in supporting dynamic supply chain practices, while keeping the rules of the game simple enough to entice a large number of competitors to submit entries. The game has been designed jointly by a team of researchers from the e-Supply Chain Management Lab at Carnegie Mellon University and the Swedish Institute of Computer Science (SICS).
Creating intelligence in the supply chain
(Computerworld Singapore - A Computerworld Mid-year Special: Out of the dark , 19 - 25 July 2002)
"In the last two decades, the use of IT in the manufacturing sector was about the optimisation of production through systems such as MRP (materials resource planning) and ERP (enterprise resource planning). Over the next five years, the focus is expected to shift to the optimisation of global networks through the use of expert systems."
Agent-Oriented Supply-Chain Management
Abstract. The supply chain is a worldwide network of suppliers, factories, warehouses, distribution centers, and retailers through which raw materials are acquired, transformed, and delivered to customers. In recent years, a new software architecture for managing the supply chain at the tactical and operational levels has emerged. It views the supply chain as composed of a set of intelligent software agents, each responsible for one or more activities in the supply chain and each interacting with other agents in the planning and execution of their responsibilities. This paper investigates issues and presents solutions for the construction of such an agent-oriented software architecture. The approach relies on the use of an agent building shell, providing generic, reusable, and guaranteed components and services for communicative-act-based communication, conversational coordination, role-based organization modeling, and others. Using these components, we show two nontrivial agent-based supply-chain architectures able to support complex cooperative work and the management of perturbation caused by stochastic events in the supply chain.
Agent-Based Modeling vs. Equation-Based Modeling:
A Case Study and Users’ Guide
Abstract. In many domains, agent-based system modeling competes with equation-based approaches that identify system variables and evaluate or integrate sets of equations relating these variables. The distinction has been of great interest in a project that applies agent-based modeling to industrial supply networks, since virtually all computer-based modeling of such networks up to this point has used system dynamics, an approach based on ordinary differential equations (ODEs). This paper summarizes the domain of supply networks and illustrates how they can be modeled both with agents and with equations. It summarizes the similarities and differences of these two classes of models, and develops criteria for selecting one or the other approach.
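The contrast between the two modeling styles can be shown on a deliberately simple stand-in (not the paper's supply network): depletion of an inventory of N items. The equation-based version integrates dN/dt = −k·N with Euler steps; the agent-based version gives each item an independent per-step probability of being consumed. Both the scenario and the parameters are invented for the illustration.

```java
import java.util.Random;

// Equation-based vs agent-based modeling of the same toy process:
// inventory depletion at rate k. The ODE model tracks an aggregate
// variable; the agent model simulates each item individually.
public class AbmVsOde {
    // Equation-based: Euler integration of dN/dt = -k * N
    public static double ode(double n0, double k, double dt, int steps) {
        double n = n0;
        for (int i = 0; i < steps; i++) n += dt * (-k * n);
        return n;
    }

    // Agent-based: each remaining item is consumed with probability k*dt
    public static int agents(int n0, double k, double dt, int steps, long seed) {
        Random rng = new Random(seed);
        int n = n0;
        for (int i = 0; i < steps; i++) {
            int consumed = 0;
            for (int a = 0; a < n; a++) {
                if (rng.nextDouble() < k * dt) consumed++;
            }
            n -= consumed;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("ODE:    " + ode(10000, 0.1, 0.1, 100));
        System.out.println("Agents: " + agents(10000, 0.1, 0.1, 100, 7L));
    }
}
```

For large populations the two track each other closely; the agent version earns its extra cost when items have heterogeneous behavior or interactions that an aggregate ODE cannot express, which is essentially the paper's selection criterion.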
Strategic Analytics Inc.
Agent-based (?) simulators for forecasting consumer-based portfolios (primarily financial industry)
Forecast of Business Performance using an Agent-based Model and Its Application to a Decision Tree-Monte Carlo Business Valuation
Only abstract available now .... seems interesting?
A few portals with focus on supply chain management and/or forecasting. Need to look into these and see if there is any value to be found? Some (but not all) seem interesting ...
- IMRA - The International Mass Retail Association
- SCOR - Supply-Chain Council
- CPFR - the Collaborative Planning, Forecasting and Replenishment Committee
- The Stanford Global Supply Chain Management forum
- ASCET - Achieving Supply Chain Excellence through Technology - Planning and Forecasting
- IBM Research - Merchandise & Inventory Management Group
- INFORMS - the Institute for Operations Research and the Management Sciences
- CIO.com - Supply Chain Management Research Center
- Supply Chain Systems Magazine
OptimalJ - Package Structure Analysis Tool
The OptimalJ - Package Structure Analysis Tool is a new tool for analyzing and improving the modular structure of Java programs, leading to product flexibility, comprehensibility and reduced development time.
The Package Structure Analysis Tool takes Java sources as input and:
Visualizes the dependencies between packages and classes with UML class diagrams.
Detects cycles in the dependency graph.
Recovers an intended architectural layering from a polluted implementation.
Suggests which dependencies should be removed to improve the structure.
Allows refactoring of the source model.
Immediately shows the effect of the refactoring on the dependency structure.
Allows source code to be verified against a design model.
Allows the refactoring to be applied to the source code (not in the free edition).
Read more about package design, layering and metrics.
A free edition of the Package Structure Analysis Tool can be downloaded (login required). This edition is fully functional, except that the functionality to refactor the source has been disabled. The download is about 1.3 MB. A full version is part of the OptimalJ suite.
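One of the listed features, detecting cycles in the package dependency graph, comes down to a standard depth-first search. Here is a minimal sketch (the package names and edge representation are made up; this is the underlying algorithm, not the tool's implementation):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Cycle detection over a package dependency graph via depth-first search
// with white/gray/black coloring: hitting a gray node means a back edge,
// i.e. a dependency cycle.
public class DepCycles {
    public static boolean hasCycle(Map<String, List<String>> deps) {
        Map<String, Integer> state = new HashMap<>(); // absent=white, 1=gray, 2=black
        for (String pkg : deps.keySet()) {
            if (visit(pkg, deps, state)) return true;
        }
        return false;
    }

    private static boolean visit(String pkg, Map<String, List<String>> deps,
                                 Map<String, Integer> state) {
        Integer s = state.get(pkg);
        if (s != null) return s == 1; // gray: found a back edge (cycle)
        state.put(pkg, 1); // mark gray while exploring
        for (String dep : deps.getOrDefault(pkg, List.of())) {
            if (visit(dep, deps, state)) return true;
        }
        state.put(pkg, 2); // mark black: fully explored, no cycle through here
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("ui", List.of("domain"));
        deps.put("domain", List.of("ui")); // hypothetical offending edge
        System.out.println(hasCycle(deps));
    }
}
```

A tool like this one goes further by suggesting which edge to remove, but the detection step is this search.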
PMD - PMD
Cool tool to analyze code?
Hmm, it makes more sense to manually include the following line in the <head> section of the Blogger template:
Should launch links such as this in a new window?
"Restarting" log at http://w1.859.telia.com/~u85917743/weblog/p2rBlogger.html