tag:blogger.com,1999:blog-357695612024-02-08T11:35:16.342-08:00neo-sanskritThomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.comBlogger22125tag:blogger.com,1999:blog-35769561.post-83133450907139841692011-12-27T23:40:00.000-08:002011-12-27T23:51:59.557-08:00Goodbye BlogspotThis serves as a reminder to whoever might actually read this blog (as well as myself) that this will be the last post on this blog. Blogspot kind of stinks and I am amazed I have lasted this long here. Future posts will be found at <a href='http://neo-sanskrit.tumblr.com'> Neo-sanskrit on tumblr</a>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-1045844357603672212011-11-23T08:58:00.000-08:002011-11-23T08:58:56.658-08:00Rails Benchmarking Reloaded<p>While evaluating caching strategies in Rails 3.1, I found existing articles comparing Rails cache store backends to be quite lacking and/or outdated. The last article I could find <a href='http://blog.endpoint.com/2011/07/raw-caching-performance-in-rubyrails.html'>compares file_store to mem_cache_store.</a> Given that <a href='http://awesomerails.wordpress.com/2011/08/23/rails-3-memcached-session-store/'>mem_cache_store is being replaced by "Dalli"</a>, the existing benchmarks simply do not cover the backends available today: </p><ul> <li>File Store</li>
<li>Memcached Store</li>
<li>Dalli</li>
<li>Mongo Store</li>
<li>Redis Store</li>
</ul><b>Test</b><br />
<script src="https://gist.github.com/1389171.js?file=cachemark.rake"></script><br />
<b>Results</b><br />
<iframe width='500' height='300' frameborder='0' src='https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AssavDj05iJVdHRRam1GY3ZGYUxETUZ1M1BmT3JyMnc&single=true&gid=1&output=html&widget=true'></iframe><br />
<br />
<p>Though it looks like mongo-store demonstrates the best overall performance, it should be noted that a mongo server is unlikely to be used solely for caching (the same applies to Redis); non-caching queries will likely be running concurrently on a mongo/redis server, which could affect the applicability of these benchmarks.<br />
</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com1tag:blogger.com,1999:blog-35769561.post-92114843449967103982011-09-29T23:15:00.000-07:002011-09-29T23:15:48.277-07:00Another Metadata Framework<h3>The quest for consistent metadata storage</h3><br />
<p>I sought to create a web application to access my MP3s and photos remotely.<br />
I wanted a new way to store information: every file should be its own record.<br />
The database should be the files themselves; I should only need to maintain a directory.<br />
</p><br />
<p>I needed a consistent metadata framework:<br />
<ul> <li>ID3 was archaic and pretty much MP3-specific</li>
<li>Exif was just for photos</li>
</ul><br />
</p><h3>Enter XMP</h3><p>I found XMP, which could tag both mp4 files and photos and had been in<br />
development by Adobe since 2005. It used XML and was capable of storing<br />
any type of information in any file.<br />
</p><p>Code was available in C++/Java and I immediately undertook the task<br />
of writing a native extension using Rice. I ran into environment issues<br />
and explored how I might implement the specification manually. </p><p>I checked out the <a href="http://partners.adobe.com/public/developer/en/xmp/sdk/XMPspecification.pdf">specification and o_O</a>, I could smell the stank of corporate governance:<br />
IPTC schemas and namespaces everywhere. The specification had rigid expectations of using a specific<br />
schema and was full of all the nastiness from when XML came to be owned and defined by large corporate bodies.</p><p>I took the best parts of the idea:<br />
<ul><li>Add metadata to any file</li>
<li>Implement a special marker to identify an XMP segment</li>
</ul><br/><br />
<br />
And added my own ideas<br />
<ul><li>Use <a href='http://bsonspec.org/'>BSON</a> (10gen's Binary JSON format)</li>
<li>Provide support for JSON schemas and namespaces</li>
</ul><br/>
Of course, there is no existing specification for schemas and namespaces, so the
tests' namespaces refer to a Google group where namespace implementation is being discussed.
</p><h3>Metahash</h3><script src="https://gist.github.com/1252835.js?file=gistfile1.rb"></script>
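<p>If the gist above doesn't render, here is a rough illustration of the core idea only: a marker followed by a BSON blob appended to the end of the file. The method names and marker bytes are made up for this sketch, and it assumes 10gen's pre-2.0 bson gem (BSON.serialize / BSON.deserialize; newer bson gems use Hash#to_bson and Hash.from_bson instead). The real API lives in the repo below.</p>
<pre>
# Illustration only -- not the metahash API.
require 'bson'   # 10gen's bson gem, pre-2.0 interface assumed

MARKER = "==METAHASH=="   # made-up marker; the real format defines its own

# Append a BSON-encoded hash to the end of any file.
def write_meta(path, hash)
  File.open(path, 'ab') do |f|
    f.write(MARKER)
    f.write(BSON.serialize(hash).to_s)
  end
end

# Find the last marker and decode whatever follows it.
def read_meta(path)
  data = File.open(path, 'rb') { |f| f.read }
  i = data.rindex(MARKER) or return nil
  BSON.deserialize(data[(i + MARKER.length)..-1])
end

write_meta('song.mp3', 'artist' => 'Someone', 'rating' => 5)
p read_meta('song.mp3')   # => {"artist"=>"Someone", "rating"=>5}
</pre>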
<p>Check it out at <a href='http://www.github.com/vajrapani666/metahash'>Github</a>
</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-1602795873399055822011-08-29T23:57:00.000-07:002011-08-29T23:57:13.628-07:00Thoughts on version-control and agricultureCould we use version-control methods and techniques to create a "physical strain repository", a distributed workflow for genetic selection that could be licensed as open source and protected from large corporate machines like Monsanto via GNU or similar licenses? <br />
<br />
While watching Food, Inc. the other night, I felt sorry for the soybean farmers who were dominated and regulated by Monsanto's patents. Monsanto produces genetically-modified seeds with extremely favorable characteristics and holds patents on those strains. Much has been written on the "<a href="http://c4sif.org/2011/06/the-evil-of-patenting-food-and-seeds/">evils of patenting food and seeds</a>". I couldn't help but think about how Monsanto's reign over the seed industry resembles Microsoft's domination of the software industry in the 90s. Granted, Microsoft's monopoly declined largely because its software was so ubiquitous that it never reaped the benefits of competition, but I feel that the rise of open-source software in the early millennium had a large part to play in cultivating a revolution against the corporate machine. Open-source software's ability to flourish is due in large part to the internet and its ability to dissolve geographic boundaries. The selection of seeds from generation to generation, by contrast, has largely been a local operation for millennia. It is not readily possible for a farmer in Georgia to view the strains of farmers in Missouri; there is no coordination that would let a civilization organize mass selection in an effective manner.<br />
<br />
<br />
Could we imagine a world where there exists a physical repository with a protocol for checking in, checking out, and forking strains of seed? Like a GitHub for organizing mass artificial selection? Could we standardize a method of describing quantifiable measurements of seed quality and strain strength, and index all forks and repositories? Couldn't we even mirror the actual evolution of a genome with source control? Aren't you, in fact, a fork?<br />
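<br />
<p>Purely as a thought experiment, a "strain commit" might not need much more than a content-addressed record of parentage plus whatever measurements the community agreed to standardize. A made-up sketch (every field name and the hashing scheme here are invented):</p>
<pre>
# Thought experiment only: a git-style "commit" for a seed strain.
require 'digest'
require 'json'

def strain_commit(parents:, grower:, traits:, notes: '')
  body = {
    'parents' => parents,   # ids of the strains this was selected from (a merge = a cross)
    'grower'  => grower,
    'traits'  => traits,    # standardized, quantifiable measurements
    'notes'   => notes,
    'date'    => Time.now.utc.to_s
  }
  id = Digest::SHA1.hexdigest(JSON.generate(body))   # content-addressed, like a git object
  { 'id' => id }.merge(body)
end

parent = strain_commit(parents: [], grower: 'farm-ga-042',
                       traits: { 'yield_bu_per_acre' => 41.2, 'drought_score' => 3 })
fork   = strain_commit(parents: [parent['id']], grower: 'farm-mo-017',
                       traits: { 'yield_bu_per_acre' => 44.8, 'drought_score' => 4 })
puts JSON.pretty_generate(fork)
</pre>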
<br />
Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-40213756873713308402011-06-04T19:12:00.000-07:002011-06-04T20:19:34.713-07:00Take a look at that gem!<p>Attempting to upload an image to the refinerycms system yielded a stack trace that was returned to the user. In this case, refinery's images_controller is picking up an error in dragonfly. </p><br /><br /><p>When we try to upload an image in refinery, we get:</p><br /><br /><script src="https://gist.github.com/1008583.js?file=stack_trace"></script><br /><br /><p>Let's check out the top file:</p><br /><br /><script src="https://gist.github.com/1008583.js?file=file_command_analyser.rb"></script><br /><p>So it appears that the error has to do with the IO.popen call. We know we wouldn't need that call if "use_filesystem" were true, and line 9 suggests there is a configuration directive for this setting somewhere, so we should try to find it. </p><br /><br /><p>We go down the stack trace to the last point where execution was in another gem. It turns out to be images_controller in the refinerycms gem. </p><br /><br /><p>Knowing the name of the controller, I tried some bash-fu and was pleasantly surprised when it worked!</p><br /><br /><script src="https://gist.github.com/1008583.js?file=commands0"></script><br /><br /><p>Nonetheless, there did not appear to be any configuration in that file. I went to the refinery gem's root directory and ran "grep -R dragonfly ." to flush out any config files. I noticed "lib/refinerycms-images.rb."</p><br /><br /><script src="https://gist.github.com/1008583.js?file=commands1"></script><br /><p><br />We check out the file and see the Dragonfly app initialization at line 22. We google around for the Dragonfly docs looking for a reference to where exactly the "use_filesystem" configuration directive must be set. Our search lands us on the docs for <a href="http://markevans.github.com/dragonfly/file.Analysers.html">Dragonfly::Analysis::FileCommandAnalyser</a><br /></p><br /><p>An example config is referenced which includes the directive we are looking for. </p><br /><script src="https://gist.github.com/1008583.js?file=dragonfly_example"></script><br /><p>We then modify the source of lib/refinerycms-images.rb to include the modifications to the analyser config. </p><br /><br /><script src="https://gist.github.com/1008583.js?file=refinery_change"></script><br /><br /><p>We attempt the image upload again and the upload succeeds. Now, how do I get involved in the refinerycms repo to discuss the changes with the leads? Something like this? <a href="https://github.com/resolve/refinerycms/pull/738">https://github.com/resolve/refinerycms/pull/738</a></p><br /><br /><p>Right?</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-48983825903512139662011-05-25T06:54:00.000-07:002011-05-25T07:25:59.499-07:00MySQL Proxy<p>Robust application frameworks will include the ability to log all database activity, but you may encounter situations where access to this functionality is limited, obscure, or completely absent. This is especially the case with Pentaho, where reports often fail with no explanation and nothing but a long stack trace. In these cases, it is helpful to implement logging on the database side.
</p><br /><br /><b>MySQL General Log</b><br /><p><br />There is of course the ability to <a href="http://dev.mysql.com/doc/refman/5.1/en/query-log.html">turn on general logging in MySQL</a> through the --general-log and --general-log-file options. There are cases where this isn't very helpful, especially in a development environment with multiple developers and applications, where the volume of queries from applications other than the one you are working on makes this method cumbersome.<br /></p><br /><br /><b>mysql-proxy</b><br /><p><br /><a href="http://www.oreillynet.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html">mysql-proxy</a> is a Lua-based framework for intercepting and manipulating communication between a MySQL client and server. It is capable of rewriting both queries and result sets on the fly. In this case we can use it for auditing the queries from Pentaho and its MySQL connection. <br /></p><br /><p>mysql-proxy scripts are written in Lua and are passed using --proxy-lua-script=. The server you are proxying to is specified by --proxy-backend-addresses=<ip:port>. The implementation of the log itself is rather simple; this one prints to STDOUT. </p><br /><script src="https://gist.github.com/991028.js?file=mysql_proxy.lua"></script><br /><br /><p>The setup simply involves creating a new JNDI through the Pentaho administration console and setting the host to the server running the proxy (the port for mysql-proxy defaults to 4040). The credentials passed to mysql-proxy are passed on to the backend server. Once the JNDI is set up, using the proxy is only a matter of changing the data source for the report being debugged to the new JNDI.</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-90498308416178200352010-11-24T09:40:00.000-08:002010-11-24T09:52:34.663-08:00Best of both worlds. Modifying source/configure options for rpms<p>RPMs are great. They let you keep track of what's installed and its dependencies, and they manage the removal of packages. In 99% of cases the RPM works great. In the other 1% you may run across a bug in the package where the widely accepted solution is to remove or add a compile flag to fix the issue.</p><br /><br /><ol><br /> <li> Stop any services using the rpm</li><br /> <li> Uninstall the rpm</li><br /> <li> Install the rpm but do not confirm. <br/>The rpm output before the confirmation prompt will tell you which repo the package comes from.</li><br /> <li> Read the baseurl from the appropriate repo config file in /etc/yum.repos.d </li><br /> <li> Create a tmp/working directory</li><br /><li> Go to that baseurl and use wget to get the somepkg.src.rpm package</li><br /> <li> rpm -i /path/to/src.rpm</li><br /> <li> cd /usr/src/redhat/ </li><br /> <li> To edit config flags, modify /usr/src/redhat/SPECS/somepkg.spec</li><br /> <li> Rebuild the rpm ( rpmbuild -bb /usr/src/redhat/SPECS/somepkg.spec) (You may need to install some *devel packages)</li><br /> <li> The RPM will be located in /usr/src/redhat/RPMS/{arch} .. where {arch} is the architecture for your machine (usually i386)</li><br /></ol>Thomas W. 
Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-44858696668374450572010-05-04T09:28:00.001-07:002010-05-04T09:29:42.587-07:00Check Active Directory Group Membership in Batch Files<pre><br />echo off<br />net group "domain admins" /domain | find "%username%" > nul<br />if errorlevel 1 goto notadmin<br />goto admin<br />:admin<br /> echo "these actions will be performed if the current user is in the group"<br /> goto quit<br />:notadmin<br /> echo "notadmin"<br />:quit<br /><br /></pre>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-2540153196407244462009-11-10T20:03:00.000-08:002009-11-10T20:32:47.315-08:00Calculation of PI<a href="http://twitpic.com/p2qf7" title="http://www.reddit.com/r/programming/comments/9uxjo/dear_progg... on Twitpic"><img src="http://twitpic.com/show/thumb/p2qf7.jpg" alt="http://www.reddit.com/r/programming/comments/9uxjo/dear_progg... on Twitpic" height="150" width="150" /></a><br /><p>Today I was drawing answers to a quiz using boxes that stood for binary representations of the answers and realized that any regular polygon is composed of triangles. I thought back to <a href='http://www.reddit.com/r/programming/comments/9uxjo/dear_proggit_imagine_that_you_must_devise_a/ '>a question I saw on reddit where someone had challenged the readers to determine the radius of a circle without knowing what PI was. </a><br /> I figured this could be a way to find the perimeter of any polygon using its "radius". Knowing that PI is the ratio of the circumference to the diameter, a polygon with the number of sides approaching infinity will have a ratio of perimeter to diameter approximately equal to PI. </p><br /><p>I started with a square. It's composed of 4 triangles, and I used the law of sines to find the perimeter. The angle at the center is 360/4 (where 4 is n, squares being 4-sided), and the other 2 angles are (180-(360/4))/2, or half the degrees remaining in the triangle. If the radius is 1 then the perimeter of the square in terms of the law of sines is (sin(90)*4)/sin(135). If we continue this for a hexagon we find that the perimeter of a hexagon with radius 1 is 6: (sin(60)*6)/sin(60).<br /></p><br /><p><br />We can take the limit of this function as the number of sides approaches infinity with the radius at 1. We take the result and divide it by the diameter of the "circle", which would be 2.</p><br /><p><br />Here is the resulting code.<br /></p><pre><br /> dr=lambda{|x| x*(Math::PI/180)} #degrees to radians (cuz Math.sin won't take degrees)<br /> p=lambda{|r,n| (r*Math.sin(dr[360.0/n])*n)/Math.sin(dr[(180-(360.0/n))/2.0])}<br />pitime=lambda{|r|<br /> last=0<br /> 4.upto(1.0/0) {|x| #from the perimeter of a square to infinity<br /> y=p[r,x]/(r*2) #calculate ratio of perimeter to diameter<br /> break if y==last<br /> last=y<br /> }<br /> last<br /> }<br /> puts pitime[1.0]<br /> >> 3.14159265341633<br /></pre><br />(Took 172599 iterations in 1.5 seconds to get that number)Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-22394101387172478722009-04-17T13:05:00.000-07:002009-04-17T13:16:59.084-07:00How to implement interactive breakpoint-like functionality.When calling a script from irb or the command line, you can use this snippet to take statements from stdin and evaluate them in the current context, just like IRB. Use qq to stop. 
<br /><br /><pre><br /><br />if options[:debug]<br /> STDOUT << ">> "<br /> while (cmd=gets.chomp)!="qq"<br /> puts eval(cmd).inspect<br /> STDOUT << ">> "<br /> end<br />end<br /></pre>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-37211246761928769202009-03-05T18:11:00.000-08:002009-03-05T18:26:04.067-08:00Limits and Derivatives and Ruby<p>I read this article about Lisp: http://funcall.blogspot.com/2009/03/not-lisp-again.html<br />I wanted to try doing the same in Ruby.<br />I didn't like his setting DX as a constant close to 0 to shortcut not creating a function to figure the limit. </p><br /><br /><pre style='background-color:#333;color:#0f0'><br />lim=lambda{|f,h|<br /> y=h<br /> 1.upto(1.0/0) {|p| ##to infinity<br /> x=h+(1.0/10)**p ##increase how close x is to h 1.0,0.1,0.001 etc..<br /> fx=f[x] ##get f(x)<br /> break if y==fx ##if we start getting the same val, stop<br /> y=fx ##store the last val<br /> }<br /> y ##return where we stopped<br />}<br />ddx=lambda {|f|<br /> lambda {|x|<br /> lim[lambda{|dx| ##the limit of the derivative function<br /> (f[x+dx]-f[x])/dx<br /> },0] #as h approaches 0<br /> }<br />}<br /></pre><br /><br />Here are some results<br />(Infinite limits) http://www.cliffsnotes.com/WileyCDA/CliffsReviewTopic/Infinite-Limits.topicArticleId-39909,articleId-39873.html<br /><pre style='background:#333;color:#0f0;'><br />irb(main):319:0> g=lambda{|x| 1/(x**2)}<br />=> #<Proc:0x0421a4b8@(irb):319><br />irb(main):320:0> lim[g,0]<br />=> Infinity<br />irb(main):321:0> g=lambda{|x| (1/(x**2))-(1/(x**3))}<br />=> #<Proc:0x04209398@(irb):321><br />irb(main):322:0> lim[g,0]<br />=> -Infinity<br /></pre><br />Derivatives<br /><pre style='background:#333;color: #0f0;'><br />irb(main):326:0> g=lambda{|x| x**3}<br />=> #<Proc:0x04316e34@(irb):326><br />irb(main):327:0> ddx[g][2]<br />=> 12.0000009928844<br /></pre><br /><br /><p>I tried for hours to get the limit to resolve to a nice integer... but could never quite get it to be more precise. I did this because i googled "derivatives calculus ruby" and couldn't find anything about this. Why not?</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-61331448993388681862008-12-21T18:22:00.000-08:002008-12-21T18:43:37.183-08:00Upgrading Server to Ubuntu Hardy Heron LTS from Dapper Drake LTS<p>My company hosts several rails applications. For the ones in high demand - we use mongrel_cluster with nginx. The only problem is ... we use apache for everything else. So we proxy pass requests into nginx from apache. That seemed so redundant that I decided to get rid of nginx and use mod_proxy_balancer instead.</p><br /><br /><p>On 6.06 this turned out to be much harder than it seemed. Essentially proxy_balancer.so did not exist in /usr/lib/apache2/modules .. I would have to compile it with apxs to get it into the installation. I found out that apache 2.2 came with proxy_balancer but when I tried to update the apache package ubuntu said it was already the newest version. I knew this meant I may have to consider an upgrade to the next LTS. Beyond using mod_proxy_balancer I had been trying to get "Phusion Passenger" to work for over a month. (I had to become very familiar with httpd.h and mod_passenger.c to get it to even compile). As of that point I still had no way of serving up rails applications from apache without using Proxy Pass. 
</p><br /><br /><p>It was late on Saturday night and I had the whole weekend to fix anything that broke so I felt pretty confident that everything should be fine. </p><br /><br /><p>I did the commands.</p><br /><br /><pre><br />#sudo su<br />#aptitude update<br />#aptitude upgrade<br />#aptitude dist-upgrade<br />#aptitude install update-manager-core<br />do-release-upgrade<br /></pre><br /><br /><p><br /><br />The upgrade was to be 287mb and take several hours. I pressed the "y" key and started browsing reddit on my laptop. <br />Through the installation I was asked what to do about configuration file conflicts between packages and my own custom versions. There were many times where I honestly didn't care because I didn't even know certain things were still installed. ldap.conf? hylafax.conf? I mean I played around with them .. thought I uninstalled those things. There were several obvious cases where I just kept my existing configs (my.cnf, apache2.cnf, php.ini etc)</p><br /><br /><p>The upgrade completed with an error message about /etc/fstab.pre-uuid already existing. I disregarded the error after googling the message for 10 minutes and finding nothing. Everything seemed fine.</p><br /><br /><p>I was delighted to finally get phusion passenger working and mod_balancer active. I took the liberty of installing about 10-15 packages I had experimented with but had no further use for. hylafax, bugzilla, otrs, auth-ldap-client etc... then I went home</p><br /><br /><b>The fallout</b><br /><br /><p>Later that night I went to show off some of the performance benchmarks to a friend and caught a page hanging. I pulled up my ssh terminal and tried to get in to see what was going on. <i>I Couldn't get in</i>! ! . </p><br /><br /><p>The next day I went on site to get on the server directly and see if I could get in. I entered every login and password I knew and it wouldn't even accept my username!. I followed instructions for manually resetting the passwords by going into recovery mode. I restarted the machine... none of the logins were checking out. I restarted again and looked at auth.log</p><br /><br /><pre><br />Dec 21 06:36:55 www nscd: nss_ldap: reconnecting to LDAP server (sleeping 1 seconds)...<br />Dec 21 06:36:56 www nscd: nss_ldap: could not connect to any LDAP server as (null) - Can't contact LDAP server<br />Dec 21 06:36:56 www nscd: nss_ldap: failed to bind to LDAP server ldap://127.0.0.1: Can't contact LDAP server<br />Dec 21 06:36:56 www nscd: nss_ldap: could not search LDAP server - Server is unavailable<br />Dec 21 06:37:01 www CRON[9390]: PAM unable to dlopen(/lib/security/pam_ldap.so)<br />Dec 21 06:37:01 www CRON[9390]: PAM [error: /lib/security/pam_ldap.so: cannot open shared object file: No such file or directory]<br />Dec 21 06:37:01 www CRON[9390]: PAM adding faulty module: /lib/security/pam_ldap.so<br />Dec 21 06:37:01 www CRON[9390]: pam_unix(cron:session): session opened for user root by (uid=0)<br />Dec 21 06:37:01 www CRON[9392]: PAM unable to dlopen(/lib/security/pam_ldap.so)<br />Dec 21 06:37:01 www CRON[9392]: PAM [error: /lib/security/pam_ldap.so: cannot open shared object file: No such file or directory]<br /></pre><br /><br /><p>It hit me like a ton of bricks. At one point we had another IT guy here who wanted to use ActiveDirectory to manage the users. I hated windows and microsoft for a variety of reasons and wanted to prove to him that I could provide a much easier to use system using linux and phpldapadmin. I installed LDAP ... 
integrated it into the system and got it running, and we never used it. Now I had removed auth-ldap-client while PAM and NSS still depended on LDAP to check whether the user was in LDAP. </p><br /><br /><p>I looked at /etc/pam.d/ and /etc/nsswitch.conf, where I found references to ldap; I also found them in /etc/auth-client-config. I read up on auth-client-config and found out that it can be used to control nsswitch and pam.d/* config files with profiles. I couldn't find a pre-ldap example so I modified the kerberos example and executed auth-client-config -a -p kerberos_example from the recovery prompt. And everything worked fine after that. </p><br /><br /><p>So please: if you hear about a package, a project or the next biggest thing and you must install something on your machine, consider doing it in a sandbox VM. </p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-21022661100986502142008-12-15T06:15:00.001-08:002008-12-15T06:17:00.457-08:00Use "less" instead of moreInstead of doing "cat somefile | more" try using "less somefile". Less is a spinoff of more which supports vim-style find (press "/" and type what you need to find) and can read sections of a file from disk as opposed to reading the whole file in and then displaying it. It's also less typing.Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com2tag:blogger.com,1999:blog-35769561.post-33878691640393202562008-12-13T13:37:00.000-08:002008-12-13T13:57:08.389-08:00MicrosoftSelling software isn't dead. It's just almost dead for many people. People saw that many great things were possible with computers. You could streamline small businesses, help groups collaborate, coordinate research and so on. I feel the world doesn't think it's fair that Microsoft be the final word in software. People weren't comfortable having their aspirations run under one company's flag. <br />Software isn't dead; in the next few years you will see some of Microsoft's largest markets turn their backs on the giant. <br />Microsoft will be big in gaming. The Xbox 360 is amazing, well done. <br />Microsoft will be big in the business world.<br />The Microsoft "Personal Computer" will be a relic, a symbol of a dark time for all of humankind and the apogee of Microsoft's reign. <br />Home PCs will run on Apple operating systems (based on Unix) or <a href="http://www.switched.com/2008/12/12/teacher-confiscates-linux-discs-claims-there-is-no-free-softwar/">Linux operating systems</a>.Thomas W. 
Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-31360233045642075472008-07-06T18:36:00.000-07:002008-07-06T18:51:26.137-07:00Custom Fax Cover Sheets & Hylafax & AvantFax<p>There is nothing more difficult than making a custom cover sheet for Hylafax. Of all the programming languages I have ever glanced at, Postscript is by far the most cryptic. I spent about 6 hours the other day trying to follow the instructions on the <a href="http://hylafax.sourceforge.net/howto/tweaking.php">official Hylafax tweaking page. </a>. I designed my cover sheet in photoshop and left the fields blank. Exported it as PNG and imported it into TGIF. I used TGIF's text tool and printed as EPS at first and tried using the script to make a coversheet.</p><br /><br /><i>That just didn't work. </i><br /><br /><p>I followed the instructions on <a href="http://madhaus.utcs.utoronto.ca/info/hylafax/FAQ/Q36.html">this page</a> for manual preparation of the raw PS file for faxcover. <br /><br />I spent 5 hours banging my head against the wall trying to figure out why when I replaced <br /><code><br />(XXXX-from-company) SH<br /></code><br /><br />with<br /><br /><code><br />/from-company IS<br /></code><br /><br />and added the macros to the top of my PS file , nothing was working right (Fields were not replaced)</p><br /><br /><p>After many hours I had slipped up and accidentally left some fields in the format seen in the PS file (the format the fields were in the raw PS printoff from TGIF) <br /><code><br />(XXXX-from-company) SH<br /></code></p><br /><p><br />Now why would that work? Apparently, All I needed to do was just place the fields with TGIF in that format and things would work. So I did that, tried again and everything worked perfectly. Now just to make that sheet the default cover sheet system wide as opposed to AvantFax's cover sheet. I copied the faxcover.ps file to /etc/hylafax and /var/spool/hylafax/etc and made sure their timestamps were synced (hylafax won't start otherwise).</p><br /><br /><p>I left the -C argument out of the sendfax command to use the system template and voila! Failure!. The resulting fax still had AvantFax's coversheet. I remembered that during installation there was a note about replacing hylafax's default coversheet. I looked into the AvantFax installer source code</p><br /><br /><code><br />(debian-install.sh from AvantFax 3.1.2)<br />mv $HYLADIR/bin/faxcover $HYLADIR/bin/faxcover.old<br />ln -s $INSTDIR/includes/faxcover.php $HYLADIR/bin/faxcover<br /></code><br /><br /><p>Shocked! AvantFax actually replaces the system wide faxcover program with a CLI PHP script! Their PHP script is made to mimic faxcover (but they conveniently forgot to update the man page for faxcover). I looked into the code, and its set to use AvantFax's cover sheet in the avantfax installation directory ($INSTDIR/includes/faxcover.ps). I replaced that file and Voila, My custom coversheet was working!</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com2tag:blogger.com,1999:blog-35769561.post-7847060484287317712008-06-22T20:32:00.000-07:002008-06-22T20:42:32.724-07:00Lazy Rails Development with Shell Function ShortcutsI love screencasts, and on every screencast I see that texmate editor I'm so jealous of. 
I work at a startup so it'll be a while before we dish out the cash for a Mac; as much as I'd like one, it's not in the cards right now.<br /><br />One of the things I like the most about that editor is the ability to seamlessly jump between different files in your Rails app. I find myself doing certain commands constantly. <br /><br />One tip is to create shell functions for your most commonly used commands:<br /><code><br />ruby script/server #rss<br />mongrel_rails start -d -e development -p 3000 #mrd_start<br />mongrel_rails stop #mrd_stop<br />vim app/controllers/some_controller.rb #rvim c some<br />vim app/models/post.rb #rvim m post<br />vim app/helpers/posts_helper.rb #rvim h posts<br />vim app/views/posts/_form.rhtml #rvim v posts/_form.rhtml<br />mongrel_rails cluster::stop #mrc stop<br />mongrel_rails cluster::start #mrc start<br /></code><br /><br />Here is the code (feedback appreciated)<br /><code><br />function rss<br />{<br /> command ruby script/server<br />}<br />function mrd_start<br />{<br /> command mongrel_rails start -d -e development -p 3000<br />}<br />function mrd_stop<br />{<br /> command mongrel_rails stop<br />}<br />function mrd_restart<br />{<br /> command mongrel_rails stop&&mongrel_rails start -d -e development -p 3000<br />}<br />function mrc_start<br />{<br /> command mongrel_rails cluster::start<br />}<br /><br />function mrc_stop<br />{<br /> command mongrel_rails cluster::stop<br />}<br />function mrc_restart<br />{<br /> command mongrel_rails cluster::stop&&mongrel_rails cluster::start<br />}<br /><br />function rvim<br />{<br /> case "$1" in<br /> 'c')<br /> command vim app/controllers/$2_controller.rb<br /> ;;<br /> 'v')<br /> command vim app/views/$2<br /> ;;<br /> 'm')<br /> command vim app/models/$2.rb<br /> ;;<br /> 'h')<br /> command vim app/helpers/$2_helper.rb<br /> ;;<br /> esac<br />}<br /><br /><br /></code><br /><br /><br />Hope this helps!Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-66208660595769574642008-06-19T05:11:00.000-07:002008-06-19T05:32:55.056-07:00Recording Transport with FreePBXIf you have ever used freepbx in production, you know that there are many cases where it's not adequate as-is for the needs of a call center. Chances are you got nowhere by going to get help in #freepbx on freenode. You probably went to #asterisk on freenode where people soon figured out you were using freepbx and then began ignoring you (they hate that). Well, I've had to conquer some difficult requests for features with our PBX and often I've had to make custom dialplans for them. I'll post a few of them.<br /><br />We were testing out a new outbound sales campaign that involved qualifying a lead and sending it to a third party. We needed to monitor the third party and make recordings, but they were on a traditional PBX. I thought of using DISA (Direct Inward System Access) but could not find any tutorials or articles on how to set up recording with DISA. 
I had to do four things:<br /><ol><li>Figure out how recording was done on the agent extensions</li><li>Create a new outbound dial plan pattern and import the recording snippet into the new plan.</li><li>Create some method of communicating the call to a webserver with a CRM on it.</li><li>Create a method of automatically transporting the call to the webserver upon completion</li></ol><br />I figured that creating a new dialplan would be the easiest, since DISA provides a dialtone from inside the system.<br /><br /><code><br />grep -R recording /etc/asterisk<br /></code><br /><br />I noticed several macro entries in extensions.conf. The one that was of particular interest was <br /><code><br />./extensions.conf:exten => s,7,Macro(record-enable,${MACRO_EXTEN},${RecordMethod})<br /></code><br /><br />It looks like record enable takes the extension to record under. In several other places, RecordMethod is set to OUT or IN. After looking at the structure of files in /var/spool/asterisk/monitor. I determined that these variables are probably to help determine the filename of the recording. <br /><br />I created a new dialplan in extensions_custom.conf<br /><br /><code><br />exten => _*123NXXNXXXXXX,1,Answer()<br />exten => _*123NXXNXXXXXX,3,Macro(record-enable,000, OUT)<br />exten => _*123NXXNXXXXXX,4,Dial(SIP/icall/${EXTEN:4},,g)<br />exten => _*123NXXNXXXXXX,5,System(curl http://192.168.1.110/RecordLog.php?args=${EXTEN:4}~${CALLFILENAME})<br />exten => _*123NXXNXXXXXX,6,Hangup<br /></code><br /><br />The dialplan matches a pattern (indicated by _) starting with *123 followed by an area code and 7 digit number. First the call is answered, recording is enabled. The dial is executed. EXTEN:4 removes the first 4 digits from the dial string *1234178575309=4178575309. The missing parameter between the first and last arguments of dial is for the timeout which i want to default to infinite. g is a flag that tells asterisk to continue through the dialplan after the callee hangs up. That means execute priority 5 and 6 after the person being called hangs up.<br /><br />Before I added Priority 5, I had to figure out how I was going to pass the call information to a second server and get it downloaded. I did asterisk -r and executed "dialplan reload". I dialed the *123 pattern from a softphone and watched to see what variables were being set. I saw that CALLFILENAME was being set. I had used curl previously to send data about calls to another server. <br /><br /><br />Priority 5: With callfilename set, I call RecordLog.php on my webserver with args=The number dialer~the callfilename. Why did I do it like that? Its because asterisk kept getting confused when I had an ampersand separating multiple variables for the qs. Asterisk would pass the command to bash, and bash would interpret the ampersand as & in bash (background the current command and execute something else). 
In retrospect, now that I'm writing this I could have quoted the url.<br /><br /><b>RecordLog.php</b><br /><code><br />//RecordLog.php?cid=${EXTEN:4}&filename=${CALLFILENAME}<br />$h=fopen("/var/www/web1/web/recordings/call_index.htm","a+");<br />$args=$_GET["args"];<br />$bits=explode("~",$args);<br />$cid=$bits[0];<br />$filename=$bits[1];<br />fwrite($h,"<a href='$filename.wav'>$cid\t".$filename."</a><br>");<br />fclose($h);<br />$url="https://192.168.1.237/recordings/misc/download.php?call=".$filename;<br />$output=shell_exec("curl -k $url > /var/www/web1/web/recordings/$filename.wav");<br /></code><br /><br />We open a call_index.htm file for append writing in a password protected recordings directory. We parse the qs and find the cid (caller id) and filename. We write a link to the htm file. We execute a curl command to retrieve the call with that filename from asterisk and pipe the output directly into a wav file. <br /><br /><b>download.php</b>(Modified from /var/www/html/recordings/misc/audio.php)<br /><code><br />if (isset($_GET["call"])) {<br /><br /> $path="/var/spool/asterisk/monitor/".$_GET["call"].".wav";<br /> if (!is_file($path)) { die("<b>404 File not found!</b>"); }<br /><br /> // Gather relevent info about file<br /> $size = filesize($path);<br /> $name = basename($path);<br /> $extension = strtolower(substr(strrchr($name,"."),1));<br /><br /> // This will set the Content-Type to the appropriate setting for the file<br /> $ctype ='';<br /> switch( $extension ) {<br /> case "mp3": $ctype="audio/mpeg"; break;<br /> case "wav": $ctype="audio/x-wav"; break;<br /> case "Wav": $ctype="audio/x-wav"; break;<br /> case "WAV": $ctype="audio/x-wav"; break;<br /> case "gsm": $ctype="audio/x-gsm"; break;<br /><br /> // not downloadable<br /> default: die("<b>404 File not found!</b>"); break ;<br /> }<br /><br /> // need to check if file is mislabeled or a liar.<br /> $fp=fopen($path, "rb");<br /> if ($size && $ctype && $fp) {<br /> header("Pragma: public");<br /> header("Expires: 0");<br /> header("Cache-Control: must-revalidate, post-check=0, pre-check=0");<br /> header("Cache-Control: public");<br /> header("Content-Description: wav file");<br /> header("Content-Type: " . $ctype);<br /> header("Content-Disposition: attachment; filename=" . $name);<br /> header("Content-Transfer-Encoding: binary");<br /> header("Content-length: " . $size);<br /> fpassthru($fp);<br /> }<br />}<br /></code><br /><br />I know there are some dangerous security flaws in this code. I rationalize it like this, this is not accessible except from in our network as our PBX is not exposed directly to the internet. THe original audio.php had some pesky crypt function interfering with things. <br /><br />I hope this was helpful to someoneThomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com2tag:blogger.com,1999:blog-35769561.post-24381273204241893802007-03-30T06:31:00.000-07:002007-03-30T08:07:26.707-07:00Improving Performance with RoundCube Webmail<p>There are many contenders in the market of web-based email clients, few distinguish themselves as much as <a href="http://www.roundcube.net">Roundcube</a> . Although I harbor within myself a profound loathing for any product whose sole star feature is the fact that it uses AJAX, Roundcube has done a great job of using it in such a way that truly streamlines performance and also makes setup a breeze. 
A similar product by AfterLogic called "MailBee" also markets itself on its AJAX mechanisms however is in my opinion quite a let down in both performance and usability for the 199$ basic edition price tag. Zimbra webmail is also AJAX based however requires at least 1 GB available memory just for the development installation , production requires in upwards of 2GB.</p><br /><br /><p>The only issue I found with Roundcube was the performance factor. Here I have gathered together and outlined several tips that helped me get the most out of this webmail suite.</p><br /><br /><i>Note to ISPConfig Users: I found that it was best to install roundcube manually and create a manual virtual hosts entry. Otherwise, the database will not be properly configured and virtusertable will not properly be used. Resulting in an inability to save identity information effectively</i><br /><br /><b>1. Configuration</b><br /><p>Assuming you are installing Roundcube on a different machine as the webserver. <br /><code><br />$rcmail_config['enable_caching'] = TRUE;<br />$rcmail_config['skip_deleted'] = FALSE;<br /></code><br/><br />Otherwise it would be best to leave caching off, caching stores all emails in the DB - is best for few accounts and those having difficulty with IMAP connections. - by <a href="http://roundcubeforum.net/forum/index.php?action=profile;u=465">MarcB</a></p><br /><br /><br /><b>2. Caching Images</b><br /><p>Although I haven't personally tried this, here is a patch to place in your images/.htaccess to speed up the loading of images - by <a href="http://roundcubeforum.net/forum/index.php?action=profile;u=1328">seansean</a></p><br /><code><br />Index: roundcubemail/skins/default/images/.htaccess<br />===================================================================<br />--- roundcubemail/skins/default/images/.htaccess (revision 0)<br />+++ roundcubemail/skins/default/images/.htaccess (revision 0)<br />@@ -0,0 +1,5 @@<br />+### activate mod_expires<br />+ExpiresActive On<br />+### Expire .gif's 1 day from when they're accessed<br />+ExpiresByType image/gif A86400<br />+ExpiresByType image/png A86400<br />\ No newline at end of file<br /></code><br /><br/><br />Although I can't remember where I heard this, but its been said that you can convert the png's to gif files and then save them in gif format as PNG's. Doesn't sound right, but apparently leads to huge speed improvements by bypassing PNG loading<br /></p><br /><br /><b>3. Code Optimization</b><br /><p>There is a client side optimization and a server side optimization. For the server side , you would use an opcode cache - there is a <a href="http://trac.lighttpd.net/xcache/wiki/InstallFromSource">tutorial for xCache </a> and a <a href="http://www.twobrownshoes.com/node/17">tutorial for APC on Ubuntu</a>. If you decide to <a href="www.webdotdev.com/nvd/server-side/php/alternative-php-cache-apc.html">install APC from source</a> I will say that if you have php5 to run phpize5 instead of phpize and pass php-config5 instead of php-config to ./configure for a modest performance boost.</p><br /><br/><br />Javascript Optimization<br/><br /><p>The total JS for roundcube is absolutely scary - and honestly is my biggest criticism of the package. Although its interface animations and AJAX implementation patterns are implemented quite gracefully, the code and methods used to achieve that effect are another story. The total JS for roundcube is 156kb. You can cut that down to 96KB by using an Javascript Optimization/Compresion utility. 
I used Dojo ShrinkSafe. Once you get the final file (after adding the app.js, common.js and googiespell.js). You will have to edit some of the source to use the new file (I added common.js last so the result was common.compressed.js). Put this file in lib/js </p><br /><p>You can run from the webmail root<br /><code><br />grep -R "app.js" .<br /></code><br />to find all references to app.js, likewise with common.js<br />in program/include/main.inc change the lines<br /><code><br /> $OUTPUT->include_script('program/js/common.js');<br /> $OUTPUT->include_script('program/js/app.js');<br /></code><br />to<br /><code><br /> $OUTPUT->include_script('program/js/common.compressed.js');<br /></code><br />And finally find all references to googiespell.js using the same command as above and comment them out as it has now been included in common.compressed.js<br /></p><br /><br /><b>4. Optimize Apache</b><br /><p>If you are running apache, you may want to <a href="http://www.serverwatch.com/tutorials/article.php/3436911">optimize your server configuration</a>, and also a2enmod (enable the modules) mod_deflate and mod_cache. <br /></p><br /><br /><br /><b>5. Optimize your Mail Domain Setup</b><br /><p>This got me. Even with all the above modifications , I was still getting 50 seconds to login. I eventually figured it out though, I had given RC the address of my mail server - and I wondered why I couldn't just say localhost to skip name resolution. Then it hit me. Name Resolution. Please ensure that in your /etc/hosts (c:\windows\system32\drivers\etc\hosts) that along with <br /><code><br />127.0.0.1 localhost.localdomain www.mydomain.com<br /></code><br />That there is also an entry for whatever mail server you are accessing with RC. <br /><code><br />127.0.0.1 localhost.localdomain www.mydomain.com mail.mydomain.com<br /></code><br />Upon changing that, the login time went from 50.35 seconds to 1.7 seconds<br /></p><br /><br /><p>Good Luck</p>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com7tag:blogger.com,1999:blog-35769561.post-48002014052565700062007-03-05T07:12:00.000-08:002007-03-05T07:48:19.374-08:00Provider Oriented Architecture vs. Protocol Oriented Architecture Vs. SOAWith all the recent buzz concerning the possible secret malicious and evil intentions of the contact management software Plaxo - I thought it was time to break in with some input on why I believe Plaxo is inherently evil - not in design, nor practice - but in concept. Plaxo is the living embodiment of a "Provider Oriented Architecture" where the benefits surround a single company or provider of a service as opposed to a protocol or actual paradigm of contact management methodology. Instead of converging adherence to a protocol or standard - if they had their way there would be ever increasing conversion rates throughout the market to their specific service.<br /><br />The evil nature of Plaxo rests in its orientation and intention to draw the market into its domain by getting everyone to contribute to their user-contributed content base and eventually get everyone to use their service - which from a business sense would sound just and fair if only the end uses for this type of content were not so blatantly sick and perverted in concept. 
No one wishes to say it - but its gotta be said - Plaxo maintains personal contact information for all of its users - although its terms of use may say differently (they are also apt to change at any point in time) what other way is there to make money with thousands upon thousands of users' personal information? Its EVIL!<br /><br />What I would propose is an open protocol and personal information management paradigm that circumvents the need for any type of corporation centered service to operate - similar to RSS and falling short of the complexities it would take to implement OpenID based solutions. RSS is a "Protocol Oriented Architecture" and has demonstrated its continuing success over the years - it is not a methodology centered around a company - nor is it necessarily solely centered around SOA (at least in the depraved corporate buzz word sense which has been so rampantly abused in recent times) - rather its centered around the fact that it remains malleable for many use-cases beyond the protocol's initial application to news. Likewise - I would propose to create an open identity protocol without all the complexities of OpenID. I was thinking if everyone had an email address - then the provider of the email address could be their authorization provider.<br /><br /><br /><br /><ol><li>User signs up at a provider (e.g. gmail - email providers would be ideal)<br /></li><li>User sets up contact information distribution settings<br /></li></ol><ul><li>Public Contact Information - contact-info - User sets up password or public access</li></ul>This would be for the same applications as plaxo - to get what information user@gmail.com has set up to be public - the information would be requested from "user:contact-info@gmail.com" in either a blank message or protocol formatted message specifying what fields are being requested regardless - only the information user has set to be public will be released. <br />In the password protected model, the user would provide the password to the requester so that the requester would be able to access the information.<br /><br /><ul><li>Private Account Information account-info - User sets up password</li></ul>For applications where you sign-up for many services - but don't want to fill out information again and again - you could set up multiple tiers for this type of authorization - some tiers releasing specific information - some tiers releasing modified protected information (such as a temporary email address for registration purposes only- integrated with the authorization / email service). The service being signed up for sends a request with the user provided password to "user:account-info@gmail.com".<br /><br /><span style="font-weight: bold;">Benefits of a Protocol through Any Provider Paradigm<br /><span style="font-weight: bold;"></span></span><br />By releasing the protocol from the grip of a specific provider, you enable the user to be completely free to go with whatever provider maintains the best implementation of the protocol - its more democratic.<br />Migrations would be simple - as providing the highest authorization to the new provider to transfer all lower information dissemination tiers.<br />Service management made easy - User Oriented Services.<br />Since the end application would be communicating with the identity authorization provider , the provider would have a list of services for which the user is signed up. The user could pipe these services together within his provider's sandbox into new services in the style of. 
user:mashup@provider.com - alternately (of course this wouldn't exactly work on the gmail platform, but it might work where a provider also provides hosting for users) user.provider.com/?somemashup<br /><br /><span style="font-weight: bold;">What you could do with a User Centric Protocol Oriented Architecture<br /><br /></span>What if you were to route a user's current-location service, fed from their Nextel phone and available as a constantly updated JSON latitude/longitude feed known as "jps", into a presentation service called "cartographer" which takes latitude and longitude and feeds them to Yahoo, Live, and Google maps? You would hit user.provider.com/?jps>latitude,longitude>cartographer to get a current map of the user at any given point in time. The user would have registered their jps service through user:account-info@provider.com or, following the same URL-instead-of-email paradigm, user.provider.com/?account-info, and also registered for cartographer via the same method. Instead of going to cartographer's site or the jps site, the user could go to the mashup and pipe parameters explicitly through the piping service: ?jps>latitude,longitude>cartographer. Optionally, the presentation service could be available as a feed of image URLs and could be piped through another service, like a service that pools images from a list of sources (cartographer and flickr) into a flash river animation: user.provider.com/?((jps>latitude,longitude>cartographer>map-url),(flickr>url-stream))>riverpool<br /><br /><br />Wouldn't that be cool? Wouldn't a protocol-oriented internet be cooler, simpler and easier than a venture-capital, Plaxo-centered internet?<br /><br />Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com0tag:blogger.com,1999:blog-35769561.post-75390814663455239062006-12-20T10:13:00.000-08:002006-12-20T10:16:52.287-08:00Recursively rename files with regex (one-liner)<b>Change 'foo' to 'bar' in all filenames under the current directory</b><br/><br /><i>To Preview Your Changes</i><br/><br /><pre><br />find . -type f -print0 | xargs -0 rename -n 's/foo/bar/g'<br /></pre><br/><br /><i>To do the actual rename</i><br /><pre><br />find . -type f -print0 | xargs -0 rename 's/foo/bar/g'<br /></pre>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com2tag:blogger.com,1999:blog-35769561.post-64226461403947931442006-12-19T12:54:00.000-08:002006-12-19T12:55:52.953-08:00Search and Replace Recursively with Bash<span style="font-family:arial;font-size:78%;">All one line!</span><br /><br /><span style="font-family:courier new;font-size:78%;">find . -name '*.html' -type f -exec sed -i 's/TextToReplace/ReplacementText/' {} \;</span>Thomas W. Devolhttp://www.blogger.com/profile/04987966373446628921noreply@blogger.com1