Thursday, June 11, 2009

Uploading Files To MySQL Database


Some web applications need to upload files into a MySQL database using PHP, for instance to store PDF documents or images and build some kind of online briefcase (like Yahoo Briefcase).

As a first step, let's create the table that will hold the uploaded files. The table consists of the following columns:

1. id : Unique id for each file
2. name : File name
3. type : File content type
4. size : File size
5. content : The file itself



For the content column we'll use the BLOB data type. A BLOB is a binary large object that can hold a variable amount of data. MySQL has four BLOB data types:

* TINYBLOB
* BLOB
* MEDIUMBLOB
* LONGBLOB

Since a plain BLOB can store only up to 64 kilobytes of data, we will use MEDIUMBLOB so we can store larger files (up to 16 megabytes).
CREATE TABLE upload (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(30) NOT NULL,
type VARCHAR(30) NOT NULL,
size INT NOT NULL,
content MEDIUMBLOB NOT NULL,
PRIMARY KEY(id)
);

Uploading a file to MySQL is a two-step process. First you upload the file to the server, then you read the file and insert its contents into MySQL.

To upload a file we need a form that lets the user browse their computer and select a file. An input of type="file" is used for that purpose.



Example : upload.php
Source code : upload.phps

An upload form must have enctype="multipart/form-data", otherwise it won't work at all. The form method also needs to be set to method="post". Also remember to put a hidden MAX_FILE_SIZE input before the file input; it restricts the size of the files the browser will submit.
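
A minimal form could look like the following sketch (the field name userfile and the 2,000,000-byte limit are illustrative choices that match the handler script below):

<form action="upload.php" method="post" enctype="multipart/form-data">
<!-- MAX_FILE_SIZE (in bytes) must come before the file input -->
<input type="hidden" name="MAX_FILE_SIZE" value="2000000">
<input type="file" name="userfile">
<input type="submit" name="upload" value="Upload">
</form>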

After the form is submitted we need to read the autoglobal $_FILES. In the example above the input name for the file is userfile, so the contents of $_FILES look like this:

$_FILES['userfile']['name']
The original name of the file on the client machine.

$_FILES['userfile']['type']
The mime type of the file, if the browser provided this information. An example would be "image/gif".

$_FILES['userfile']['size']
The size, in bytes, of the uploaded file.

$_FILES['userfile']['tmp_name']
The temporary filename of the file in which the uploaded file was stored on the server.

$_FILES['userfile']['error']
The error code associated with this file upload. ['error'] was added in PHP 4.2.0

Example : upload.php


<?php
if(isset($_POST['upload']) && $_FILES['userfile']['size'] > 0)
{
    $fileName = $_FILES['userfile']['name'];
    $tmpName  = $_FILES['userfile']['tmp_name'];
    $fileSize = $_FILES['userfile']['size'];
    $fileType = $_FILES['userfile']['type'];

    // read the uploaded file from its temporary location and escape it
    $fp      = fopen($tmpName, 'r');
    $content = fread($fp, filesize($tmpName));
    $content = addslashes($content);
    fclose($fp);

    if(!get_magic_quotes_gpc())
    {
        $fileName = addslashes($fileName);
    }

    include 'library/config.php';
    include 'library/opendb.php';

    $query = "INSERT INTO upload (name, size, type, content) ".
             "VALUES ('$fileName', '$fileSize', '$fileType', '$content')";

    mysql_query($query) or die('Error, query failed');

    include 'library/closedb.php';

    echo "File $fileName uploaded";
}
?>
Before you do anything with the uploaded file, do not assume that it reached the server successfully. Always check, for example by looking at the file size: if it is larger than zero bytes we can assume the upload succeeded (inspecting $_FILES['userfile']['error'] is another option).

PHP saves the uploaded file under a temporary name and stores that name in $_FILES['userfile']['tmp_name']. Our next job is to read the contents of this file and insert them into the database. Always make sure you use addslashes() to escape the content. Applying addslashes() to the file name is also recommended, because you never know what the file name will be.

That's it, now you can upload your files to MySQL. The next step is to write a script to download those files.
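
As a rough sketch of what such a download script might look like (download.php and the id parameter are hypothetical; it reuses the same library/ include files and the old mysql_* functions used above):

<?php
// download.php - minimal sketch, not production code
include 'library/config.php';
include 'library/opendb.php';

// fetch the requested row by its id
$id    = intval($_GET['id']);
$query = "SELECT name, type, size, content FROM upload WHERE id = $id";

$result = mysql_query($query) or die('Error, query failed');
list($name, $type, $size, $content) = mysql_fetch_array($result);

// send the file to the browser with the stored content type and name
header("Content-Type: $type");
header("Content-Length: $size");
header("Content-Disposition: attachment; filename=$name");
echo $content;

include 'library/closedb.php';
?>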

Saturday, June 6, 2009

Waikato Environment for Knowledge Analysis (WEKA)

What is WEKA?:

WEKA is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost every platform. You can access the WEKA class library from your own Java program and implement new machine learning algorithms.
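
For example, a minimal sketch of calling the WEKA library from your own Java code might look like this (it assumes weka.jar is on the classpath and an ARFF file named iris.arff whose last attribute is the class; the class names are from the WEKA 3.4 API):

import java.io.BufferedReader;
import java.io.FileReader;
import weka.core.Instances;
import weka.classifiers.trees.J48;

public class WekaDemo {
    public static void main(String[] args) throws Exception {
        // load an ARFF dataset and use the last attribute as the class
        Instances data = new Instances(new BufferedReader(new FileReader("iris.arff")));
        data.setClassIndex(data.numAttributes() - 1);

        // build a J48 decision tree and print the resulting model
        J48 tree = new J48();
        tree.buildClassifier(data);
        System.out.println(tree);
    }
}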

Schemes in WEKA:

There are three major groups of schemes implemented in WEKA:
(1) Implemented schemes for classification
(2) Implemented schemes for numeric prediction
(3) Implemented "meta-schemes"

WEKA also contains a large variety of tools for pre-processing datasets, so that you can focus on your algorithm without worrying about details such as reading data from files, implementing filtering algorithms, and writing code to evaluate the results.

How to install and run WEKA on a CS account:

1) Log into your CS account.

2) Create a directory called weka using:

> mkdir weka

> cd weka

3) Create a symbolic link to our installed weka pack:

> ln -s /home/course/cs573x/weka-3-4/weka.jar

4) You are able to run weka now:

> java -jar weka.jar

*note: this will only work on Linux or Solaris workstations, as it runs the GUI version of WEKA

5) Please make the following two links to enable you to access the source code of weka and some sample datasets.

> ln -s /home/course/cs573x/weka-3-4/weka

> ln -s /home/course/cs573x/weka-3-4/data

The next step is to set the CLASSPATH environment variable before using the command line. You need to point it at the location of the weka.jar file.

1) For sh, ksh and bash users:

Please add "export CLASSPATH=/home/course/cs573x/weka-3-4/weka.jar:$CLASSPATH" into your shell configuration profile.

2) For csh and tcsh users:

Please add "setenv CLASSPATH /home/course/cs573x/weka-3-4/weka.jar" into your shell configuration profile.

3) To test if it is set correctly:

Type "java weka.classifiers.trees.j48.J48 " (note the change from previous versions, in case you are familiar with any), it should display a list of all learning options for J48. If it displays an exception error message, then you will check if you set the environment variable correctly. Please make sure that you also set the correct path to Java, so that the system can locate and run Java.

Now the installation is done! You can run WEKA either from the command line or through the graphical user interface. Remember that the GUI only runs on a Linux or Solaris workstation; it won't work over telnet.

Thursday, May 28, 2009

Software Engineering: Three-Tier or n-tier Architecture

Three-Tier Architecture:

Three-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which the presentation, the application processing, and the data management are logically separate processes.

Tier 1: the client contains the presentation logic, including simple control and user input validation. This application is also known as a thin client.


Tier 2: the middle tier is also known as the application server, which provides the business processes logic and the data access.


Tier 3: the data server provides the business data.



These are some of the advantages of a three-tier architecture:

* It is easier to modify or replace any tier without affecting the other tiers.
* Separating the application and database functionality means better load balancing.
* Adequate security policies can be enforced within the server tiers without hindering the clients.


A three-tier architecture is a client-server architecture in which the functional process logic, data access, computer data storage and user interface are developed and maintained as independent modules on separate platforms. Three-tier architecture is a software design pattern and a well-established software architecture.
5 Benefits of a Three-Tier Architecture
Here are 5 benefits of separating an application into tiers:
1.    It gives you the ability to update the technology stack of one tier, without impacting other areas of the application.
2.    It allows for different development teams to each work on their own areas of expertise. Today’s developers are more likely to have deep competency in one area, like coding the front end of an application, instead of working on the full stack.
3.    You are able to scale the application up and out. A separate back-end tier, for example, allows you to deploy to a variety of databases instead of being locked into one particular technology. It also allows you to scale up by adding multiple web servers.
4.    It adds reliability and more independence of the underlying servers or services.
5.    It provides ease of maintenance of the code base, managing presentation code and business logic separately, so that a change to business logic, for example, does not impact the presentation layer (a minimal sketch of this separation follows the list).
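
As a minimal, hypothetical PHP sketch of this separation (the table, function names, and connection details are invented for illustration), the presentation code calls a business-logic function, which in turn calls a data-access function, so each layer can change without touching the others:

<?php
// data tier: the only place that talks to the database
function find_customer_by_id($db, $id)
{
    $stmt = $db->prepare('SELECT id, name, balance FROM customer WHERE id = ?');
    $stmt->execute(array($id));
    return $stmt->fetch(PDO::FETCH_ASSOC);
}

// business tier: applies a business rule, knows nothing about HTML or SQL details
function customer_credit_status($db, $id)
{
    $customer = find_customer_by_id($db, $id);
    if ($customer === false) {
        return 'unknown customer';
    }
    return ($customer['balance'] >= 0) ? 'in good standing' : 'over limit';
}

// presentation tier: only formats the result for the user
$db = new PDO('mysql:host=localhost;dbname=shop', 'user', 'password');
echo 'Customer 42 is ' . customer_credit_status($db, 42);
?>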


Most companies understand the weaknesses of their aging legacy system but still have to deliver products and services, while they pay employees and perform other mission critical operations. In short, there is only so much to go around.
An N-Tier architecture has a presentation layer and two separate server layers - a business logic or application layer and a data layer.
The client becomes the presentation layer and handles the user interface. The application layer sits between the other two layers, sending the client's data requests to the data layer. The client is freed of application layer tasks, which eliminates the need for powerful client technology.

Why N-Tier is right for mission-critical systems.
In the N-Tier model, a departmental client could initiate some departmental business logic on the departmental application server(s) which, as part of a network transaction, could update the departmental database(s) and then initiate business logic on the enterprise application server(s).
These enterprise application server(s) could then update the enterprise database server(s). All of this takes place under the umbrella of a network transaction.
Any one of the chain of application server(s) could initiate a rollback which would be cascaded to all of the application server(s) involved. This capability allows a delegated approach to how business rules are implemented.
This business logic can access data in legacy mainframe operating systems such as CICS / VSAM, IDMS, and/or SQL compliant RDBMS servers such as Oracle, SYBASE, Interbase, DB2 etc, running on a variety of Wintel or UNIX platforms.
In addition, as the business processes are identified and appropriate business logic is implemented on the application server(s), these services could then be globally advertised.

Friday, May 22, 2009

OutsourcingByElance

Elance Username: csenayeem025
Company: AnavaSoft (AnavaSoft Tech Park)
Category: Web & Programming,

================
Personal Message:

It's a professional site. It will help you become a provider.

Sincerely,

csenayeem025

================

Some people enjoy article writing, other people find it tedious, some people find graphic design to be painfully difficult, others struggle with getting their website online and so forth. At Elance you can outsource one-time or occasional tasks for a one off fee. These may be contracted to experts in various areas of affiliate marketing including: writing content for your websites (including newsletters, articles, free reports etc), graphic design (e.g. for your website header graphic, or free report cover, etc), programming, voice over work and more. Just to give you a little more detail, the Elance services categories go beyond those just used by an affiliate marketer and expand out to include:
*    Programmers – web, software, SEO, mobile, blogs, database and others.
*   Designers – graphic, logo, animation, illustration, banners, brochures and others.
*    Writers – articles, web content, blogs, translations, copywriting, technical writing, ghost writing, e-books and others.
*    Marketers – advertising, SEM, social media, sales lead generation, telemarketing, e-mailers, research and surveys.
*    Administration – customer service, virtual assistants, data entry, web research, e-mail handling, transcription, word processing and others.
*    Consultants – accounting, finance, engineering, legal, product design, human resources, management and others.

There are more than 150,000 experts in various fields available on Elance.

A good post will result in many bids on your project, often dozens.

It pays to be very specific in your job posts; that makes it easier for Elancers to bid and gives you a greater chance that you’ll receive what you expect.
I hope you enjoyed this post and I’m keen to hear your comments and questions. While I have a lot of people working for me in house, I’ve just ticked over $100,000 in spending on Elance, so it is certainly a place that has been very helpful for me, especially when I have more projects going on than my staff can handle.

Sunday, May 17, 2009

Cisco Router Configuration

Basic Cisco Router Config with BGP Uplink
Do you have your own /24 IP subnet and want to set up a BGP router? This article gives a basic overview of the key components required. The syntax used is for an IOS 12.2 Cisco 6500 series, but it is applicable to a 7600 series, a 7200 series, or even an 1800 or 2800 series router.

Assumptions:
1. The /24 subnet we are announcing is 200.50.75.0/24.
2. The IPv4 WAN Subnet from our upstream BGP provider is 128.64.12.128/30.
3. Upstream BGP peer’s AS is 1000, and our AS is 17500
So to begin, we assume our BGP uplink is delivered to us via a basic Cat5 handoff. This handoff has a static WAN subnet of 128.64.12.128/30 – our side of the WAN is 128.64.12.130 and the provider’s side of the WAN is 128.64.12.129. That also means our default GW is 128.64.12.129.

We connect this Cat5 uplink to FastEthernet7/1 on our 6500. Now we need to go into the 6500 series router, configure the WAN link, and then do all the BGP configuration so we can start using our /24 subnet. A sketch of what that configuration could look like follows.
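
This is only a minimal sketch using the addresses and AS numbers assumed above; a production setup would also add prefix filters, authentication, and so on:

interface FastEthernet7/1
 description BGP uplink to provider
 ip address 128.64.12.130 255.255.255.252
 no shutdown
!
! static route to Null0 so the BGP network statement has a matching route to announce
ip route 200.50.75.0 255.255.255.0 Null0
!
router bgp 17500
 no synchronization
 bgp log-neighbor-changes
 network 200.50.75.0 mask 255.255.255.0
 neighbor 128.64.12.129 remote-as 1000
 neighbor 128.64.12.129 description Upstream provider
 no auto-summary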

OSPF Concepts

The current version of OSPF used for IPv4 is version 2, defined in Request for Comments (RFC) 2328. OSPF offers many advantages over earlier routing protocols such as RIP in that it is flexible and scales to a range of environments, including very large ones. OSPF is a link-state routing protocol, which means each OSPF device is tasked with maintaining a complete 'map' of the routes available throughout the network. The OSPF metric is based on interface bandwidth.
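
As a minimal sketch of a basic single-area OSPF configuration (the process ID is arbitrary and the network statements simply reuse the subnets from the BGP example above):

router ospf 1
 ! advertise the LAN and WAN networks in backbone area 0
 network 200.50.75.0 0.0.0.255 area 0
 network 128.64.12.128 0.0.0.3 area 0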

Summary

While OSPF configuration can become complex, once some basic concepts are understood a basic configuration is not all that confusing to understand or complete. A large-scale OSPF configuration can be a bit hard to follow when quickly looking over it, but a network engineer who knows the base OSPF concepts and sits down with the configuration should be able to figure out the intention of the OSPF design.

Basic Linux Commands


What is Linux?

Linux is mainly used on servers. Strictly speaking, Linux is an operating system kernel. You might have heard of UNIX; Linux is a UNIX clone, but it was actually written by Linus Torvalds from scratch.
The following contents will be presented:
  • Use the basic Linux commands touch, cp, cd, ls, mv, mkdir, rm and pwd
  • Basic file read/write using echo, cat and the Vi text editor
  • Search for a file or file contents using file and grep
  • Compress and decompress folders using tar
  • Get the system date and time using the timedatectl, date and hwclock commands
  • Create, run in the background and kill a shell script, using the chmod, ps and kill commands
  • Monitor system performance using the system manager htop
  • Run a script automatically at boot time, by creating a service
  • Change the system password using passwd
  • Download files, using wget
  • Use the opkg package manager to list, install and remove packages

Commands

1.  ls

The ls command displays the names of the files and directories in the current working directory. A number of options are available that allow you to specify what details about the files should be shown.

2.   cd

If you want to change the current working directory you use the cd command. For example, cd correspondence would set the current working directory to correspondence, if it exists.
3.  pwd
When you first open the terminal, you are in the home directory of your user. To know which directory you are in, you can use the "pwd" command. It gives the absolute path, which means the path that starts from the root. The root is the base of the Linux filesystem and is denoted by a forward slash ( / ). The user directory is usually something like /home/username.
4. mkdir & rmdir
The mkdir command is used when you need to create a folder or a directory. For example, if you want to make a directory called "DIY", you can type "mkdir DIY". If the directory name contains a space, as in "DIY Hacking", escape the space: "mkdir DIY\ Hacking".
rmdir is the command used for deleting a directory, but it can only delete an empty directory. To delete a directory containing files, rm is used (see below).
5. rm
The rm command is used to delete files. On its own it will not delete a directory; "rm -r" is used for that, and it deletes both the folder and the files in it.
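
A short example session putting these commands together (the directory and file names are arbitrary):

> pwd
/home/username
> mkdir DIY
> cd DIY
> touch notes.txt
> ls
notes.txt
> cd ..
> rm -r DIY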


yppasswd -- to change password
ls -- to list files and directories
cd -- to change directory
pico -- to create or edit a file
vim -- advanced text editor
chmod -- change file access permissions
pwd -- show the "present working directory"
cp -- copy files/directories
mv -- move or rename a file/directory
rm -- remove files/directories
mkdir -- make a new directory
rmdir -- delete an (empty) directory
date -- show date and time
cal -- show calendar
du -- show file space usage
logout -- exit the session
man -- show the manual pages
tin -- to check newsgroups
telnet -- to log into another computer/server
ssh -- secure login into another computer/server
finger -- look up information about users logged on a server
talk -- talk to another user
w -- show who is logged on and what they are doing
write -- write to other users
ftp -- to transfer files from one computer to another
cat -- print file(s) on standard output
alias -- alias a command
locate -- locate a file containing some expression
grep -- print lines matching a pattern
df -- show disk space usage of mounted filesystems

tar -- archive/extract (and optionally compress) files and directories
tar -xvf file.tar -- extract the archive 'file.tar'
tar -xzvf file.tar.gz -- extract the gzip-compressed archive 'file.tar.gz'
tar -xjvf file.tar.bz2 -- extract the bzip2-compressed archive 'file.tar.bz2'
tar -cvf file.tar file -- archive 'file' into 'file.tar'
tar -czvf file.tar.gz file -- archive and gzip-compress 'file' into 'file.tar.gz'
tar -cjvf file.tar.bz2 file -- archive and bzip2-compress 'file' into 'file.tar.bz2'

Wednesday, May 6, 2009

MoneyFromOnlineProgramming

Everyone knows that there are websites out there that can help you earn money, of course, but before you scour the internet for ideas, check out this list. It is important for everyone to learn how to use their programming skills to lead a good life and be comfortable with their lifestyle choices. And that’s why, today, we will go through different ways a programmer can monetize his/her skills.
Beginners tend to struggle when monetizing their skills. Many of them have no idea where to start, or even what to do with their skills. It is easy to get lost and waste time doing nothing, which is why it is necessary to know the different ways to make your skills count.
The latest trend is showcasing your skills by broadcasting real-world projects and building an audience. Also, don’t overlook traditional methods including freelancing, teaching others, and much more.
However, before we start, it is important to understand the need for programmers in the industry. Programmers are in huge demand right now, and the demand will only increase in the near future.
As a beginner, you need to make sure you don’t lose focus and be patient in anything you try.
How to monetize your programming skills as a beginner
1. Start Freelancing: Freelancing is growing at a rapid pace. Even though freelancing is a great option, it does require more attention and patience than a traditional job. You can try online freelancing platforms such as UpWork, HackerEarth, LiveEdu and others to get started.

2. Use broadcasting to showcase your talent and build your portfolio: It is not like a traditional resume where you just list your past projects, and the client needs to go to your GitHub repository. It is more of a visual display of work that you have done, and how you complete assignments.

3. Work on open-source projects and build a portfolio for long-term benefits: As a beginner, most of your energy should go into building a good online presence, and open-source projects help a lot in this regard. You can also choose to broadcast your open-source projects and make the most of your invested time.

4. Volunteer for a non-profit organization and build relationships: Volunteering for them not only helps you understand the current state of computer science but also helps you get into one of the paid jobs that they may have to offer. Many non-profit platforms also offer placement guidance and internships.

5. Write about the technology that interests you: As a blogger, you can write about anything. You can choose to be a Java blogger and start a Java blogging website, or if you are front-end lover, you can start a blog for front-end engineers. The choice is all yours.

Conclusion

Now, you are better informed on how to get started. As you can see, there is no single path for beginners. You can choose to be a front-end engineer, a technical writer, a teacher, and much more. All you need to do is find the path that interests you most and keep walking it until you succeed.
If you want to earn your best, then you need to keep all the above points in mind. If you think that some important points have been missed, don’t forget to leave your comments below and let us know.

WEKA: The Waikato Environment for Knowledge Analysis

The WEKA machine learning workbench has grown out of the need to apply machine learning to real-world data sets in a way that promotes a “what if?…” or exploratory approach. Each machine learning algorithm implementation requires the data to be present in its own format, and has its own way of specifying parameters and output. The WEKA system was designed to bring a range of machine learning techniques or schemes under a common interface so that they may be easily applied to data in a consistent way. This interface should be flexible enough to encourage the addition of new schemes, and simple enough that users need only concern themselves with the selection of features in the data for analysis and what the output means, rather than with how to use a machine learning scheme.
Applications: The WEKA system has been applied successfully in a variety of areas, including agriculture, machine learning research and education.

Agricultural:
The most significant project so far carried out using the WEKA workbench has been the analysis of dairy herd data for the purposes of isolating rules that describe factors that farmers might use for culling decisions [10]. This involved working with a large data set of 19,103 records containing 705 attributes spread across 10 herds and 6 years. About 40 new attributes were derived, including attributes like age and production index relative to herd, and these were added to the original data set, which was then processed by various machine learning schemes.

Research:
The WEKA system has also proved to be valuable in machine learning research. Firstly, it is useful in supporting the development of new machine learning algorithms from the standpoints of both implementation and evaluation. The presence of defined data set file formats and tools to access and manipulate the contents of data sets reduces the effort required to get data into a new scheme. The presence of a common output format, which can be evaluated using the PREval tool, also takes the effort of evaluation away from the development process.

Education:
WEKA has also been used in a limited role to introduce students of an advanced undergraduate course on machine learning to the subject and to the capabilities of the different sorts of schemes.