
Wednesday, May 18, 2011

Linksys E2000 Advanced Wireless N Router review


The Linksys E2000 is an affordable Wi-Fi router with excellent features. For the price, you get a 300Mbps 802.11n dual-band wireless access point that can run either in 2.4GHz or 5GHz mode, as well as a four-port Gigabit Ethernet switch. The best part is that it's very reliable and fast.


Just like the older Linksys WAG320N (which, unlike the E2000, had a modem), the E2000 can't run both the 2.4GHz network and the 5GHz network simultaneously, so you'll have to pick one network frequency and stick with it. We can't fault the router for this given its price. You'll have to consider purchasing the Linksys E3000 if you want simultaneous dual-band operation.

The Linksys E2000 looks the same as the older Linksys WAG and WRT models. It doesn't have external antennas, it can be wall-mounted, it has bright LED status lights and its capabilities can be gleaned by the colour of its wing (grey means it has Gigabit Ethernet capability). It lacks fancy features such as storage ports for turning ordinary drives into NAS devices, but if you want this function you can opt for the E2100L. The E2000's web interface hasn't changed, but the special setup software that Linksys ships with its wireless routers has been simplified in a bid to make it as easy as possible for novice users to install it.

In our tests, the E2000 proved to be solid as a rock; we used it with the Billion 5200S RD ADSL2+ modem and our iiNet connection never once dropped out unexpectedly during our week-long test period. Furthermore, the router's wireless performance was stellar.

When transferring video files from our server (which was connected to the router over Ethernet) to our dual-band capable notebook, the E2000 never faltered. Using the 2.4GHz band, we achieved transfer rates of 9.47 megabytes per second from 2m away from the router and 8.38MBps from 10m away. These are excellent results that pretty much blow away the competing routers at the same price point, such as the Netgear Wireless N 300 WNR2000 and the D-Link DIR-600. From 2m away, the 5GHz tests produced the exact same transfer rates as the 2.4GHz tests (9.47MBps), but from 10m the transfer rates dropped slightly to 6.74MBps.

The router offered great range in our test environment; it was able to supply a usable Internet connection from over 35m away, but its range will vary depending on the environment where you install the E2000. We think it will perform well in a mid-sized house for streaming video in addition to sharing a fast Internet connection across many computers.

Linksys has done a lot over the years to try and take the pain out of setting up wireless routers. With the E2000, the supplied CD-ROM autoruns the Cisco Connect software, which goes through all the steps you need to undertake in order to first connect your new wireless router, and then asks you to type in your ISP username and password. That's all there is to it. In our tests, though, we also had to manually restart the router in order for it to work after the program updated its settings.

The only thing that caught our attention about this setup procedure was how long it took. We were able to set up the router a lot quicker by logging in to the web interface and entering all our details manually. But the CD-ROM is aimed at people who don't want to have to deal with advanced settings, and in this respect it's definitely useful.

Security layers on a modern website




Last time, we looked at a basic website design. Now it's time to start digging into the details around what's really being used behind the scenes. This time, we'll focus on security aspects. When a user starts their browser and connects to the website, there are many layers of security that may be present to ensure only authorized users can access the data. The more valuable the data, the more steps required to ensure that this data is protected. Here is a typical setup for higher-value content websites:

Each piece adds a different layer of security. All combined, they can act as a strong defense against unauthorized access to the information. The first two are best done by a dedicated network engineer or network consulting services company that understands security implementations and is up to date on best practices and the latest procedures.

1) Firewalls are familiar to most. You set up firewalls in a static configuration to stop all traffic except to the services you want available. For this, we use a Juniper SSG20 firewall:

http://www.juniper.net/us/en/products-services/security/ssg-series/ssg20/
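To make the default-deny idea concrete, here is a minimal sketch in Python; the rule set and field names are invented for illustration and are not Juniper syntax:

# Default-deny: everything is blocked unless an explicit allow rule matches.
ALLOW_RULES = {
    ("tcp", 443),  # HTTPS to the public web service
    ("tcp", 22),   # SSH for administration
}

def permit(protocol, dest_port):
    # Return True only when an allow rule matches; deny by default.
    return (protocol, dest_port) in ALLOW_RULES

print(permit("tcp", 443))  # True  - allowed service
print(permit("udp", 53))   # False - no rule, so the traffic is dropped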

2) Next, if you're on the Internet with valuable data, you should have an intrusion detection system (IDS) and an intrusion prevention system (IPS). These detect abnormalities and protect the service by denying access to computers that misbehave. Perhaps the user fails login more than X times. Perhaps they're fishing for a URL and making iterative attempts to discover content. Perhaps they're requesting a URL that's a known buffer overflow attack? Juniper makes this unit, which is a good starter:

http://www.juniper.net/us/en/products-services/security/idp-series/idp75/
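As a rough illustration of the failed-login heuristic mentioned above, here is a Python sketch that locks out a source address after too many failures in a time window; the thresholds and names are assumptions, not any vendor's implementation:

import time
from collections import defaultdict

MAX_FAILURES = 5        # the 'X times' from above
WINDOW_SECONDS = 300    # sliding window for counting failures

failures = defaultdict(list)  # source IP -> timestamps of failed logins

def record_failure(src_ip):
    now = time.time()
    recent = [t for t in failures[src_ip] if now - t < WINDOW_SECONDS]
    recent.append(now)
    failures[src_ip] = recent
    return len(recent) > MAX_FAILURES  # True means: start denying access

if record_failure("203.0.113.7"):
    print("blocking 203.0.113.7")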

3) From here, the application takes over security responsibility. It's recommended to have an application server management consultant set up Single Sign-On (SSO) due to its complexity. From there, an application programmer can handle the rest.

There's typically an SSO layer that handles authentication of the user. This authentication can take the form of a simple username/password. It can be extended to require SecurID cards with rotating one-time passwords. Or higher-end retinal, face recognition, or fingerprint scanners may be used, depending on the value of the data being presented.

4) After a person authenticates, they need to be authorized for the data they're requesting. This is typically an LDAP lookup against Active Directory or some other directory service.
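For a feel of what that lookup involves, here is a minimal sketch using the Python ldap3 library; the server address, bind account, base DN and group name are hypothetical placeholders:

from ldap3 import Server, Connection, SUBTREE

def user_in_group(username, group_dn):
    server = Server("ldap://ad.example.com")
    conn = Connection(server, user="EXAMPLE\\svc_lookup",
                      password="secret", auto_bind=True)
    # Find the user's entry and read the groups it belongs to.
    conn.search("dc=example,dc=com",
                f"(sAMAccountName={username})",
                search_scope=SUBTREE, attributes=["memberOf"])
    if not conn.entries:
        return False
    return group_dn in conn.entries[0].memberOf

# Authorize only members of a hypothetical 'Finance' group for this content.
print(user_in_group("jsmith", "CN=Finance,OU=Groups,DC=example,DC=com"))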

5) Most companies stop here and allow the user full access to the SSL web service. However, one thing is common today: you can no longer trust the client computer that's connecting to the site. It could be a company desktop or laptop, which is relatively safe. It could be a personal smartphone, iPad or other PDA, or worse: a public computer. You can no longer treat all computers the same.

For instance, what happens when your CEO logs into a kiosk at the airport because their computer broke? They need to approve the latest acquisition plan of the XYZ Company. They can authenticate correctly, and they'll be authorized to see the content. What's stopping a download of this critical information onto a public PC? Will any of this data be left in cache after they log off? This is where endpoint integrity checks are used.

Endpoint integrity checks are run against the client PC. They can be as simple as: did you run a virus scan in the past 30 days? More typical today is: will the data remain secure if loaded onto the device? Is there an encrypted hard drive? Is there a BIOS password? Is the device a “sanctioned” platform?
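In miniature, an endpoint integrity decision boils down to evaluating posture claims against a policy before serving sensitive content. The claim names and policy below are invented for illustration:

REQUIRED_POSTURE = {
    "disk_encrypted": True,
    "virus_scan_within_30_days": True,
    "sanctioned_platform": True,
}

def endpoint_ok(posture_report):
    # Every required claim must be present and satisfied.
    return all(posture_report.get(k) == v for k, v in REQUIRED_POSTURE.items())

# An airport kiosk would fail the check, so downloads can be blocked.
kiosk = {"disk_encrypted": False, "virus_scan_within_30_days": True,
         "sanctioned_platform": False}
print(endpoint_ok(kiosk))  # False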

6) SSL encryption of the transaction between the client and server. This is the last line of defense. Information passed between the two computers is encrypted in transit.
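At code level, "encrypted in transit" looks like a TLS-wrapped socket. A minimal sketch using Python's standard ssl module (the hostname is a placeholder):

import socket, ssl

context = ssl.create_default_context()  # also verifies the server certificate
with socket.create_connection(("www.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
        print("negotiated", tls.version())  # e.g. 'TLSv1.2'
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(200))  # response bytes arrive decrypted locally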

Security today is complex, and the cost of getting it wrong is harsh. In discussions at RSA 2011, the average cost quoted for a single incident was $250-300k. Has HIPAA been compromised? Is Sarbanes-Oxley affected? Did confidential company information get disclosed? A better plan and execution up front can save money and aggravation in the end.

Tuesday, November 2, 2010

Installing and configuring a Wireless Router

GadgTechWorld.blogspot.com

http://www.mobilefish.com/images/tutorials/linksys_install.gif
A wireless router affords laptop or portable computer users greater mobility in their homes and businesses. In most home networks, wireless routers are connected to a cable or DSL modem, and the router relays the signals and information that make up the Internet protocol (IP) traffic to the user's computer via radio signals rather than wires.
To communicate with the wireless router, individual computers house transceivers such as an internal expansion card, a peripheral docked by USB or, in the case of laptops, a PC card or hard-wired internal device. For those accessing the Internet through a high-speed connection, a wireless router can also serve as a hardware firewall (as opposed to a software program), enabling protection from undesirable outside computers without exhausting as many system resources as traditional firewall programs.


Installing a Wireless Router
First, turn off the PC and modem, then remove the Ethernet cable from the PC and plug it into the router's WAN port. Install a second Ethernet cable between the PC's Ethernet port and one of the router's Ethernet ports. Power on the modem, then the router, then the PC, waiting for the system to boot and initialize before attempting an Internet connection.
Most routers are programmed with the manufacturer's default settings, including the network's name or service set identifier (SSID), channel and sign-on password. These default settings generally may be changed using included software or an online setup utility provided by the router's manufacturer.
Configure the router by entering the router's configuration IP address in your browser, then logging in with the configuration utility ID and the default password. To find your router's default IP address and the default login info, refer to the owner's manual. If you own a Linksys router, a popular brand, the IP address is usually 192.168.1.1. Two other popular brands, D-Link and Netgear, generally use 192.168.0.1.
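If you're unsure which default address your router answers on, a quick probe like this Python sketch can tell you before you open the browser (a convenience script, not vendor tooling):

import socket

CANDIDATES = ["192.168.1.1", "192.168.0.1"]  # Linksys, then D-Link/Netgear

for ip in CANDIDATES:
    try:
        with socket.create_connection((ip, 80), timeout=1):
            print("router web interface reachable at http://%s/" % ip)
    except OSError:
        print("no response from %s" % ip)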

Connecting a Printer to a Wireless Router

First, check the documentation that came with your printer to determine how it's designed to connect to computers. Connections made through Ethernet, USB and (obsolescent) parallel ports are common, but newer printers are sometimes wireless-enabled and allow you to simply add them to your wireless network.
Refer to the owner's manual for your particular wireless router. Inside, find the default IP address and the default login info. If you own a Linksys router, the IP address is usually 192.168.1.1. Other popular brands D-Link and Netgear generally use 192.168.0.1.

Connecting Two Wireless Routers

Bridging two wireless routers involves configuring both networks manually. Visit each network location - which should be recognized by your computer automatically - and configure the appropriate Service Set Identifier (SSID), Wired Equivalent Privacy (WEP) or WiFi Protected Access (WPA) key and authentication information. Make sure you know the SSID and WEP or WPA key ahead of time.

Connecting an Xbox 360 to a Wireless Router

Connecting an Xbox 360 to a wireless router allows you to use Xbox Live without physically connecting your Xbox to a cable. Power up your Xbox system and the router, then plug the wireless networking adapter into the two slots at the Xbox's rear. Unplug any existing Ethernet cables and connect the USB connector to the port adjacent to the adapter. Use the system area of the Xbox dashboard to adjust your network settings. This should connect you to the wireless network.

Monday, November 1, 2010

DHCP (Dynamic Host Configuration Protocol)

IP addresses, unlike hardware addresses, not only must be unique on a given internetwork but also must reflect the structure of the internetwork. They contain a network part and a host part, and the network part must be the same for all hosts on the same network. Thus, it is not possible for the IP address to be configured once into a host when it is manufactured, since that would imply that the manufacturer knew which hosts were going to end up on which networks, and it would mean that a host, once connected to one network, could never move to another. For this reason, IP addresses need to be reconfigurable.
http://www.networkingreviews.com/images/dhcp-server-client.jpg
In addition to an IP address, there are some other pieces of information a host needs to have before it can start sending packets. The most notable of these is the address of a default router—the place to which it can send packets whose destination address is not on the same network as the sending host. Most host operating systems provide a way for a system administrator, or even a user, to manually configure the IP information needed by a host. However, there are some obvious drawbacks to such manual configuration. One is that it is simply a lot of work to configure all the hosts in a large network directly, especially when you consider that such hosts are not reachable over a network until they are configured.


Even more importantly, the configuration process is very error-prone, since it is necessary to ensure that every host gets the correct network number and that no two hosts receive the same IP address. For these reasons, automated configuration methods are required. The primary method uses a protocol known as the Dynamic Host Configuration Protocol (DHCP).

DHCP relies on the existence of a DHCP server that is responsible for providing configuration information to hosts. There is at least one DHCP server for an administrative domain. At the simplest level, the DHCP server can function just as a centralized repository for host configuration information. Consider, for example, the problem of administering addresses in the internetwork of a large company. DHCP saves the network administrators from having to walk around to every host in the company with a list of addresses and a network map in hand, configuring each host manually. Instead, the configuration information for each host could be stored in the DHCP server and automatically retrieved by each host when it is booted or connected to the network. However, the administrator would still pick the address that each host is to receive; he would just store that in the server. In this model, the configuration information for each host is stored in a table that is indexed by some form of unique client identifier, typically the “hardware address” (e.g., the Ethernet address of its network adaptor).

A more sophisticated use of DHCP saves the network administrator from even having to assign addresses to individual hosts. In this model, the DHCP server maintains a pool of available addresses that it hands out to hosts on demand. This considerably reduces the amount of configuration an administrator must do, since now it is only necessary to allocate a range of IP addresses (all with the same network number) to each network.

Since the goal of DHCP is to minimize the amount of manual configuration required for a host to function, it would rather defeat the purpose if each host had to be configured with the address of a DHCP server. Thus, the first problem faced by DHCP is that of server discovery.

To contact a DHCP server, a newly booted or attached host sends a DHCPDISCOVER message to a special IP address (255.255.255.255) that is an IP broadcast address. This means it will be received by all hosts and routers on that network. (Routers do not forward such packets onto other networks, preventing broadcast to the entire Internet.) In the simplest case, one of these nodes is the DHCP server for the network. The server would then reply to the host that generated the discovery message (all the other nodes would ignore it). However, it is not really desirable to require one DHCP server on every network because this still creates a potentially large number of servers that need to be correctly and consistently configured. Thus, DHCP uses the concept of a relay agent. There is at least one relay agent on each network, and it is configured with just one piece of information: the IP address of the DHCP server. When a relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP server and awaits the response, which it will then send back to the requesting client. The process of relaying a message from a host to a remote DHCP server is shown in Diagram 1-A. Diagram 1-B shows the format of a DHCP message. The message is actually sent using a protocol called UDP (the User Datagram Protocol) that runs over IP.
DHCP is derived from an earlier protocol called BOOTP, and some of the packet fields are thus not strictly relevant to host configuration. When trying to obtain configuration information, the client puts its hardware address (e.g., its Ethernet address) in the chaddr field. The DHCP server replies by filling in the yiaddr (“your” IP address) field and sending it to the client. Other information such as the default router to be used by this client can be included in the options field.
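To see these fields in action, here is a sketch of the DHCPDISCOVER broadcast built with the scapy library; the MAC address and interface name are placeholders, and sending raw packets requires root privileges:

from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

mac = "00:11:22:33:44:55"
discover = (
    Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") /
    IP(src="0.0.0.0", dst="255.255.255.255") /           # limited broadcast
    UDP(sport=68, dport=67) /                            # DHCP runs over UDP
    BOOTP(chaddr=bytes.fromhex(mac.replace(":", ""))) /  # hardware address
    DHCP(options=[("message-type", "discover"), "end"])
)
sendp(discover, iface="eth0")  # the server's reply carries yiaddr filled in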

In the case where DHCP dynamically assigns IP addresses to hosts, it is clear that hosts cannot keep addresses indefinitely, as this would eventually cause the server to exhaust its address pool. At the same time, a host cannot be depended upon to give back its address, since it might have crashed, been unplugged from the network, or been turned off. Thus, DHCP allows addresses to be “leased” for some period of time. Once the lease expires, the server is free to return that address to its pool. A host with a leased address clearly needs to renew the lease periodically if in fact it is still connected to the network and functioning correctly. It is important to note that DHCP may also introduce some more complexity into network management, since it makes the binding between physical hosts and IP addresses much more dynamic. This may make the network manager's job more difficult if, for example, it becomes necessary to locate a malfunctioning host.
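A toy address pool makes the lease mechanism concrete: expired leases are reclaimed so crashed or unplugged hosts cannot exhaust the pool. The class and method names here are invented for illustration:

import time

class LeasePool:
    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.leases = {}  # ip -> (client id, expiry time)
        self.lease_seconds = lease_seconds

    def allocate(self, client_id):
        self.reclaim_expired()
        ip = self.free.pop(0)
        self.leases[ip] = (client_id, time.time() + self.lease_seconds)
        return ip

    def renew(self, ip):
        client_id, _ = self.leases[ip]
        self.leases[ip] = (client_id, time.time() + self.lease_seconds)

    def reclaim_expired(self):
        now = time.time()
        for ip, (_, expiry) in list(self.leases.items()):
            if expiry < now:  # host never renewed: take the address back
                del self.leases[ip]
                self.free.append(ip)

pool = LeasePool(["10.0.0.10", "10.0.0.11", "10.0.0.12"], lease_seconds=3600)
print(pool.allocate("aa:bb:cc:dd:ee:ff"))  # 10.0.0.10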

HSCSD (High Speed Circuit Switched Data)

Hi, this is a data transfer technology used in GSM mobiles for Internet access.

High-speed circuit switched data (HSCSD) is a feature that enables the co-allocation of multiple full rate traffic channels (TCH/F) of GSM into an HSCSD configuration. The aim of HSCSD is to provide a mixture of services with different air interface user rates by a single physical layer structure. The available capacity of an HSCSD configuration is several times the capacity of a TCH/F, leading to a significant enhancement in air interface data transfer capability.
http://www.tech-faq.com/images/Article_Images/High-Speed-Circuit-Switched-Data.jpg

Ushering faster data rates into the mainstream is the new speed of 14.4 kbps per time slot and HSCSD protocols that approach wireline access rates of up to 57.6 kbps by using multiple 14.4 kbps time slots. The increase from the current baseline of 9.6 kbps to 14.4 kbps is due to a nominal reduction in the error-correction overhead of the GSM radio link protocol (RLP), allowing the use of a higher data rate.
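The quoted figures are simple slot arithmetic, shown here in a few lines of Python:

PER_SLOT_KBPS = {"GSM CSD": 9.6, "HSCSD (reduced RLP overhead)": 14.4}

for name, rate in PER_SLOT_KBPS.items():
    for slots in (1, 2, 4):
        print("%s: %d slot(s) -> %.1f kbps" % (name, slots, rate * slots))
# Four 14.4 kbps slots give the 57.6 kbps wireline-class rate quoted above.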

For operators, migration to HSCSD brings data into the mainstream, enabled in many cases by relatively standard software upgrades to base station (BS) and mobile switching center (MSC) equipment. Flexible air interface resource allocation allows the network to dynamically assign resources related to air interface usage according to the network operator's strategy, and to honor the end-user's request for a change in the air interface resource allocation based on data transfer needs. The provision of an asymmetric air interface connection allows simple mobile equipment to receive data at higher rates than would otherwise be possible with a symmetric connection.

For end-users, HSCSD enables the roll-out of mainstream high-end segment services that enable faster web browsing, file downloads, mobile video-conference and navigation, vertical applications, telematics, and bandwidth-secure mobile local area network (LAN) access. Value-added service providers will also be able to offer guaranteed quality of service and cost-efficient mass-market applications, such as direct IP where users make circuit-switched data calls straight into a GSM network router connected to the Internet. To the end-user, the value-added service provider or the operator is equivalent to an Internet service provider that offers a fast, secure dial-up Internet protocol service at cheaper mobile-to-mobile rates. HSCSD is provided within the existing mobility management. Roaming is also possible. The throughput for an HSCSD connection remains constant for the duration of the call, except for interruption of transmission during handoff. The handoff is simultaneous for all time slots making up an HSCSD connection. End-users wanting to use HSCSD have to subscribe to general bearer services. Supplementary services applicable to general bearer services can be used simultaneously with HSCSD.



Firmware on most current GSM PC cards needs to be upgraded. The reduced RLP layer also means that a stronger signal strength is necessary. Multiple time slot usage is probably only efficiently available in off-peak times, increasing overall off-peak idle capacity usage. HSCSD is not a very feasible solution for bursty data applications.


ENHSCSD=FALSE
object: BSC [BASICS]
range: TRUE, FALSE
default: FALSE

Enable HSCSD: this parameter specifies whether the feature 'High Speed Circuit Switched Data (HSCSD)' is enabled for the BSC or not.

Notes:
1) This parameter enables HSCSD for the BSC base only. To activate it, however, it must be explicitly enabled for each BTS (see CREATE BTS [BASICS]: BTSHSCSD).

2) As a mandatory precondition for HSCSD, the features 'early classmark sending' (see SET BTS [OPTIONS]: EARCLM) and 'pooling' (see parameter ENPOOL) must be enabled!

Principle: HSCSD is a feature which allows the 'bunching' of up to 4 consecutive radio timeslots for data connections of up to 38.4 (= 4 x 9.6) kbit/s (multislot connections). The data rate depends on the bearer capability requested by the MS and the negotiation result between MS and MSC. Each HSCSD connection consists of 1 main TCH, which carries the main signalling (both FACCH and SACCH), and a further 1..3 secondary TCHs. All radio timeslots used for one connection are FR timeslots located on the same TRX and use the same frequency hopping mode and the same TSC.

Connection modes: There are 2 types of multislot connections: symmetric and asymmetric ones. In symmetric mode all secondary TCHs are bi-directional (UL and DL); in asymmetric mode the secondary channels are only uni-directional (DL) TCHs or can be a mix of bi-directional and uni-directional TCHs (example: one 'HSCSD 3+2' call consists of one main TCH, one secondary bi-directional TCH and one secondary uni-directional TCH). The downlink-based asymmetry allows the use of a receive rate higher than the transmission rate and is thus very typical for Internet applications. The asymmetric mode is only possible for non-transparent data connections.

Resource allocation: The BSC is responsible for the flexible air resource allocation. It may alter the number of TCH/F as well as the channel codings used for the connection. Reasons for the change of the resource allocation may be either the lack of radio resources, handover and/or the maintenance of the service quality. The change of the air resource allocation is done by the BSC using 'service level upgrading and downgrading' procedures. For transparent HSCSD connections the BSC is not allowed to change the user data rate, but it may alter the number of TCHs used by the connection (in this case the data rate per TCH changes). For non-transparent calls the BSC is also allowed to downgrade the user rate to a lower value.

Handover: 

In symmetric mode individual signal level and quality reporting for each used channel is applied. For an asymmetric HSCSD configuration individual signal level and quality reporting is used for the main TCH. The quality measurements reported on the main channel are based on the worst quality measured on the main and the unidirectional downlink timeslots used. In both symmetric and asymmetric HSCSD configuration the neighbour cell measurements are reported on each uplink channel used. All TCHs used in an HSCSD connection are handed over simultaneously. The BSC may alter the number of timeslots used for the connection and the channel codings when handing the connection over to the new channels. All kinds of inter-cell handovers are supported, intracell handover is possible only with cause 'complete to inner' or 'inner to complete'.

Tuesday, October 26, 2010

RedMere makes some skinny HDMI cables

 


Most of you have probably used an HDMI cable for your home theater, and this thick cable is going to be in even more demand.

After all, many phones like the Nokia N8 have an HDMI port so video files on the phone can be viewed on bigger screens. If every phone has this feature, then not only will it increase the demand for HDMI cables, but create a need for them to be thinner, lighter, and more portable. 

It would appear that a company called RedMere is ahead of the curve, as they are making thinner HDMI cables. Guess which one of the HDMI cable cross-sections in the photo above is RedMere's.

How is this possible? I talked to RedMere at CTIA Fall 2010, and they use some sort of chip that fits in the HDMI plug itself. They wanted to show me the chip, but they seemed to have lost it at their booth. 

I’m not certain what to think of that, but I won’t hold it against the company. In fact, I think RedMere is going to have a lucrative future when everyone needs an HDMI cable. Perhaps there will come a day when all phones have an HDMI cable that can stretch and retract, and RedMere will be the supplier.

Wednesday, October 20, 2010

Wireless Sensor Network Vs Ad Hoc Networking

In a recent article I wrote about wireless sensor networks. Here I write about the relation between wireless sensor networks and ad hoc networking.

Wireless sensor network applications require wireless ad hoc networking techniques. Although many protocols and algorithms have been proposed for traditional wireless ad hoc networks, they are not well suited for the unique features and application requirements of wireless sensor networks. The differences between wireless sensor networks and traditional wireless ad hoc networks are listed here:

The number of sensor nodes in a wireless sensor network can be several orders of magnitude higher than the nodes in a wireless ad hoc network. In a wireless sensor network, sensor nodes are densely deployed. Sensor nodes are prone to failure.

The topology of a wireless sensor network changes very frequently. Sensor nodes mainly use broadcast communication paradigms whereas most traditional ad hoc networks are based on point-to-point communications.

Sensor nodes are limited in power, computational capabilities, and memory. Sensor nodes may not have global identification because of the large amount of overhead and large number of sensors. Another factor that distinguishes wireless sensor networks from traditional mobile ad hoc networks (MANETs) is that the end goal is the detection/estimation of some event(s) of interest, and not just communication. To improve detection performance, it is often quite useful to fuse data from multiple sensors. Data fusion requires the transmission of data and control messages. This need may impose constraints on network architecture.

The large number of sensing nodes may congest the network with information. To solve this problem, some sensors, such as cluster heads, can aggregate the data, perform some computation (e.g., average, summation, highest value, etc.), and then broadcast the summarized new information.
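Cluster-head aggregation in miniature: instead of forwarding every raw reading, the head sends one summary per round. The readings and summary fields below are invented for illustration:

readings = [21.4, 21.9, 35.2, 21.7, 22.0]  # temperatures from member nodes

summary = {
    "count": len(readings),
    "mean": sum(readings) / len(readings),
    "max": max(readings),  # a spike worth reporting upstream
}
print(summary)  # one small packet replaces five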

Wireless Sensor Network

I think this article will be most useful to those who are interested in networking. Here I write about wireless sensor networks.

A wireless sensor network contains a large number of tiny sensor nodes that are densely deployed either inside the phenomenon to be sensed or very close to it. Sensor nodes consist of sensing, data processing, and communicating components. The position of sensor nodes need not be engineered or predetermined.

Wired sensor networks have been around for decades, with an array of gauges measuring temperature, fluid levels, humidity, and other attributes on pipelines, pumps, generators, and manufacturing lines. Many of these run as separately wired networks, sometimes linked to a computer but often to a control panel that flashes lights or sounds an alarm when a temperature rises too high or a machine vibrates too much. Also wired in are actuators, which let the control panel slow down a pump or start a fan in response to the sensor data.

Now advances in silicon radio chips, coupled with cleverly designed routing algorithms and network software, are promising to eliminate those wires and their installation and maintenance costs. Mesh network topologies will let these wireless networks route around nodes that fail or whose radio signal is degraded by interference from heavy equipment.

A gateway will create a two-way link with legacy control systems, hosts, wired local area networks (LANs), or the Internet. Wireless sensor networks can use several different wireless technologies, including IEEE 802.11 WLANs, Bluetooth, and radio frequency identification (RFID). But at present most applications use low-power radios with a range of about 30 to 200 feet and data rates of up to around 300 kbps. IEEE 802.15.4 is the approved low-rate standard for a simple, short-range wireless network whose radio components could run several years on a single battery.

EDGE

EDGE makes use of the existing GSM infrastructure in a highly efficient manner. Radio network planning will not be greatly affected since it will be possible to reuse many existing BTS sites. GPRS packet switching nodes will be unaffected, because they function independently of the user bit rates, and any modifications to the switching nodes will be limited to software upgrades. There is also a smooth evolution path defined for terminals to ensure that EDGE-capable terminals will be small and competitively priced.

EDGE-capable channels will be equally suitable for standard GSM services, and no special EDGE, GPRS, and GSM services will be needed. From an operator viewpoint this allows a seamless introduction of new EDGE services — perhaps starting with the deployment of EDGE in the service hot spots and gradually expanding coverage as demand dictates. The roll-out of EDGE-capable BSS hardware can become part of the ordinary expansion and capacity enhancement of the network. The wideband data capabilities offered by EDGE allow a step-by-step evolution to IMT-2000, probably through a staged deployment of the new 3G air interface on the existing core GSM network. Keeping GSM as the core network for the provision of 3G wireless services has additional commercial benefits. It protects the investment of existing operators; it helps to ensure the widest possible customer base from the outset; and it fosters supplier competition through the continuous evolution of systems.

GSM operators who win licenses in new 2 GHz bands will be able to introduce IMT-2000 wideband coverage in areas where early demand is likely to be greatest. Dual-mode EDGE/IMT-2000 mobile terminals will allow full roaming and handoff from one system to the other, with mapping of services between the two systems. EDGE will contribute to the commercial success of the 3G system in the vital early phases by ensuring that IMT-2000 subscribers will be able to enjoy roaming and interworking globally.

Building on an existing GSM infrastructure will be relatively fast and inexpensive, compared to establishing a total 3G system. The intermediate move to GPRS and later to EDGE will make the transition to 3G easier. While GPRS and EDGE require new functionality in the GSM network, with new types of connections to external packet data networks they are essentially extensions of GSM. Moving to a GSM/IMT-2000 core network is likewise a further extension of this network.


SERVICES OFFERED BY EDGE


PS Services. The GPRS architecture provides IP connectivity from the mobile station to an external fixed IP network. For each service, a QoS profile is defined. The QoS parameters include priority, reliability, delay, and maximum and mean bit rate. A specified combination of these parameters defines a service, and different services can be selected to suit the needs of different applications.

CS Services. The current GSM standard supports both transparent (T) and nontransparent (NT) services. Eight transparent services are defined, offering constant bit rates in the range of 9.6 to 64 kbps.

A nontransparent service uses RLP to ensure virtually error-free data delivery. For this case, there are eight services offering maximum user bit rates from 4.8 to 57.6 kbps. The actual user bit rate may vary according to channel quality and the resulting rate of transmission.

The introduction of EDGE implies no change of service definitions. The bit rates are the same, but the way services are realized in terms of channel coding is different. For example, a 57.6 kbps nontransparent service can be realized with coding scheme ECSD TCS-1 (telephone control channel-1) and two time slots, while the same service requires four time slots with standard GSM using coding scheme TCH/F14.4. Thus, EDGE CS transmission makes the high bit rate services available with fewer time slots, which is advantageous from a terminal implementation perspective. Additionally, since each user needs fewer time slots, more users can be accepted, which increases the capacity of the system.
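The time-slot comparison above is straightforward arithmetic; the EDGE per-slot rate used below (28.8 kbps) is implied by the two-slot example in the text:

import math

TARGET_KBPS = 57.6
PER_SLOT = {"GSM TCH/F14.4": 14.4, "EDGE ECSD (implied)": 28.8}

for scheme, rate in PER_SLOT.items():
    print(scheme, "->", math.ceil(TARGET_KBPS / rate), "time slot(s)")
# GSM needs four slots for 57.6 kbps; EDGE delivers it with two.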

Asymmetric Services Due to Terminal Implementation. ETSI has standardized two mobile classes: one that requires only GMSK transmission in the uplink and 8-PSK in the downlink, and one that requires 8-PSK in both links. For the first class, the uplink bit rate is limited to that of GSM/GPRS, while the EDGE bit rate is still provided in the downlink. Since most services are expected to require higher bit rates in the downlink than in the uplink, this is a way of providing attractive services with a low complexity mobile station. Similarly, the number of time slots available in the uplink and downlink need not be the same. However, transparent services will be symmetrical.

WLAN

Hi visitors, this is an article about wireless local area networks and their equipment.

WLAN

With the success of wired local area networks (LANs), the local computing market is moving toward wireless LAN (WLAN) with the same speed as current wired LAN. WLANs are flexible data communication systems that can be used for applications in which mobility is required. In the indoor business environment, although mobility is not an absolute requirement, WLANs provide more flexibility than that achieved by the wired LAN. WLANs are designed to operate in industrial, scientific, and medical (ISM) radio bands and unlicensed-national information infrastructure (U-NII) bands. In the United States, the Federal Communications Commission (FCC) regulates radio transmissions; however, the FCC does not require the end-user to buy a license to use the ISM or U-NII bands. Currently, WLANs can provide data rates up to 11 Mbps, but the industry is making a move toward high-speed WLANs. Manufacturers are developing WLANs to provide data rates up to 54 Mbps or higher. High speed makes WLANs a promising technology for the future data communications market.

The IEEE 802.11 committee is responsible for WLAN standards. WLANs include IEEE 802.11a (WiFi 5), IEEE 802.11b (WiFi), IEEE 802.11g and IEEE 802.11n. The deployment of WLANs can provide connectivity in homes, factories, and hot-spots. The IEEE 802.16 group is responsible for wireless metropolitan area network (WMAN) standards. This body is concerned with fixed broadband wireless access systems, also known as “last mile” access networks. In this article, we focus on different types of WLANs and introduce IEEE 802.16 standards including WiMAX (high speed WLAN).


WLAN EQUIPMENT

There are three main components that form the basis of the wireless network. These are:

LAN adapter: Wireless adapters are made in the same basic form as their wired counterparts: PCMCIA, CardBus, PCI, and USB. They also serve the same function, enabling end-users to access the network. In a wired LAN, adapters provide an interface between the network operating system and the wire. In a WLAN, they provide the interface between the network operating system and an antenna to create a transparent connection to the network.

Access point (AP): The AP is the wireless equivalent of a LAN hub. It receives, buffers, and transmits data between the WLAN and the wired network, supporting a group of wireless user devices. An AP is typically connected with the backbone network through a standard Ethernet cable, and communicates with wireless devices by means of an antenna. The AP or the antenna connected to it is generally mounted on a high wall or on the ceiling. Like cells in a cellular network, multiple APs can support handoff from one AP to another as the user moves from area to area. APs have a range from 20 to 500 meters. A single AP can support between 15 and 250 users, depending on technology, configuration, and use. It is relatively easy to scale a WLAN by adding more APs to reduce network congestion and enlarge the coverage area. Large networks requiring multiple APs deploy them to create overlapping cells for constant connectivity to the network. A wireless AP can monitor movement of a client across its domain and permit or deny specific traffic or clients from communicating through it.

Outdoor LAN bridges: Outdoor LAN bridges are used to connect LANs in different buildings. When the cost of buying a fiber optic cable between buildings is considered, particularly if there are barriers such as highways or bodies of water in the way, a WLAN can be an economical alternative. An outdoor bridge can provide a less expensive alternative to recurring leased-line charges. WLAN bridge products support fairly high data rates and ranges of several miles with the use of line-of-sight directional antennas. Some APs can also be used as a bridge between buildings of relatively close proximity.

Tuesday, October 19, 2010

Internet Architecture


In this article I write about the Internet architecture.

The Internet architecture is also sometimes called the TCP/IP architecture, after its two main protocols. It evolved out of experiences with an earlier packet-switched network called the ARPANET. Both the Internet and the ARPANET were funded by the Advanced Research Projects Agency (ARPA), one of the R&D funding agencies of the U.S. Department of Defense. The Internet and ARPANET were around before the OSI architecture, and the experience gained from building them was a major influence on the OSI reference model. While the seven-layer OSI model can, with some imagination, be applied to the Internet, a four-layer model is often used instead.

At the lowest level are a wide variety of network protocols, denoted NET1, NET2, and so on. In practice, these protocols are implemented by a combination of hardware (e.g., a network adaptor) and software (e.g., a network device driver). For example, you might find Ethernet or Fiber Distributed Data Interface (FDDI) protocols at this layer. (These protocols in turn may actually involve several sublayers, but the Internet architecture does not presume anything about them.)

The second layer consists of a single protocol—the Internet Protocol (IP). This is the protocol that supports the interconnection of multiple networking technologies into a single, logical internetwork.

The third layer contains two main protocols—the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP and UDP provide alternative logical channels to application programs: TCP provides a reliable byte-stream channel, and UDP provides an unreliable datagram delivery channel (datagram may be thought of as a synonym for message). In the language of the Internet, TCP and UDP are sometimes called end-to-end protocols, although it is equally correct to refer to them as transport protocols.
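The two channel abstractions are visible directly in the sockets API. A minimal Python sketch (the addresses are local placeholders, and the TCP connect assumes something is listening on that port):

import socket

# TCP: connect first, then read/write an ordered, reliable byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 8080))
tcp.sendall(b"a stream of bytes, delivered in order")
tcp.close()

# UDP: no connection; each sendto() is an independent datagram (a message)
# that may be lost, duplicated or reordered.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"one self-contained message", ("127.0.0.1", 9090))
udp.close()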
Running above the transport layer are a range of application protocols, such as FTP, TFTP (Trivial File Transport Protocol), Telnet (remote login), and SMTP (Simple Mail Transfer Protocol, or electronic mail), that enable the interoperation of popular applications. To understand the difference between an application layer protocol and an application, think of all the different World Wide Web browsers that are available (e.g., Mosaic, Netscape, Internet Explorer, Lynx, etc.). There are a similarly large number of different implementations of Web servers. The reason that you can use any one of these application programs to access a particular site on the Web is because they all conform to the same application layer protocol: HTTP (HyperText Transport Protocol). Confusingly, the same word sometimes applies to both an application and the application layer protocol that it uses (e.g., FTP). The Internet architecture has three features that are worth highlighting.

First, the Internet architecture does not imply strict layering. The application is free to bypass the defined transport layers and to directly use IP or one of the underlying networks. In fact, programmers are free to define new channel abstractions or applications that run on top of any of the existing protocols.

Second, if you look closely at the protocol graph you will notice an hourglass shape—wide at the top, narrow in the middle, and wide at the bottom. This shape actually reflects the central philosophy of the architecture. That is, IP serves as the focal point for the architecture—it defines a common method for exchanging packets among a wide collection of networks. Above IP can be arbitrarily many transport protocols, each offering a different channel abstraction to application programs.

Thus, the issue of delivering messages from host to host is completely separated from the issue of providing a useful process-to-process communication service. Below IP, the architecture allows for arbitrarily many different network technologies, ranging from Ethernet to FDDI to ATM to single point-to-point links.

A final attribute of the Internet architecture (or more accurately, of the IETF culture) is that in order for someone to propose a new protocol to be included in the architecture, they must produce both a protocol specification and at least one (and preferably two) representative implementations of the specification. The existence of working implementations is required for standards to be adopted by the IETF. This cultural assumption of the design community helps to ensure that the architecture’s protocols can be efficiently implemented.

NEURAL NETWORKS


Hi, this is an article about neural networks and their importance.

A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it. Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights. Some of the units are connected to the external environment, and can be designated as input or output units. The weights are modified so as to try to bring the network's input/output behavior more into line with that of the environment providing the inputs.

Each unit has a set of input links from other units, a set of output links to other units, a current activation level, and a means of computing the activation level at the next step in time, given its inputs and weights. The idea is that each unit does a local computation based on inputs from its neighbors, but without the need for any global control over the set of units as a whole. In practice, most neural network implementations are in software and use synchronous control to update all the units in a fixed sequence.

To build a neural network to perform some task, one must first decide how many units are to be used, what kind of units are appropriate, and how the units are to be connected to form a network. One then initializes the weights of the network, and trains the weights using a learning algorithm applied to a set of training examples for the task. The use of examples also implies that one must decide how to encode the examples in terms of inputs and outputs of the network.

Notation
Neural networks have lots of pieces, and to refer to them we will need to introduce a variety of mathematical notations. For convenience, see the diagram.

Simple computing elements: each unit performs a simple computation. It receives signals from its input links and computes a new activation level that it sends along each of its output links. The computation of the activation level is based on the values of each input signal received from a neighboring node, and the weights on each input link. The computation is split into two components. First is a linear component, called the input function, in_i, that computes the weighted sum of the unit's input values. Second is a nonlinear component called the activation function, g, that transforms the weighted sum into the final value that serves as the unit's activation value, a_i. Usually, all units in a network use the same activation function.

The total weighted input is the sum of the input activations times their respective weights:

    in_i = Σ_j W_(j,i) * a_j

The activation function g is then applied to this weighted sum to give the unit's activation value: a_i = g(in_i).
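The same computation in a few lines of Python, using a simple step function as the activation function g:

def unit_output(inputs, weights, threshold=0.5):
    # in_i = sum over j of W_(j,i) * a_j
    total = sum(w * a for w, a in zip(weights, inputs))
    # a_i = g(in_i), here a hard threshold
    return 1 if total >= threshold else 0

print(unit_output([1, 0, 1], [0.4, 0.9, 0.2]))  # 0.4 + 0.2 = 0.6 >= 0.5 -> 1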

Saturday, October 16, 2010

Artificial Neural Network

Artificial Neural Network (ANN)

Artificial neural networks are a method of information processing and computation that takes advantage of today's technology. Mimicking the processes found in biological neurons, artificial neural networks are used to predict and learn from a given set of data. In data analysis, neural networks are more robust than statistical methods because of their capability to handle small variations of parameters and noise.

An artificial neural network is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The important element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements, known as neurons, working in unison to solve specific problems. Like people, an ANN learns by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

Why use neural networks?
Neural networks can determine patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Within the category of information it has been given to process, a trained neural network can be considered an "expert".

It has following advantages:
a) Adaptive learning:
Capability to learn tasks based on the given data for training or initial experience.
b) Self-Organisation:
It can create its own organisation or representation of the information it receives during learning time.
c) Real Time Operation:
Its computations may be carried out in parallel and special hardware devices are being designed and manufactured which take advantage of this capability.
d) Fault Tolerance via Redundant Information Coding:
Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Neural networks versus conventional computers
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach in order to solve a problem. The computer cannot solve the problem unless the specific steps it needs to follow are known. That limits the problem-solving capability of conventional computers to problems that we already understand and know how to solve.

Neural networks and human brains process information in a similar way. The network is created from a large number of highly interconnected processing elements working in parallel to solve a specific problem. Neural networks learn from example. They can't be programmed to perform a specific task.

Neural networks and conventional algorithmic computers complement each other. Some tasks, such as arithmetic operations, are more suited to an algorithmic approach, while others are better handled by neural networks. A large number of systems use a combination of the two approaches in order to perform at maximum efficiency.

Different architecture of neural networks
1) Feed-forward networks:
Feed-forward ANNs permit signals to travel one way only, from input to output. There is no feedback, i.e. the output of any layer doesn't affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are widely used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

2) Feedback networks:
By using loops in the network, feedback networks can transfer signals in both directions. Feedback networks are powerful and complex. A feedback network's state changes dynamically until it reaches an equilibrium point, where it remains until the input changes. Feedback architectures are also referred to as interactive or recurrent.

3) Network layers:
A typical artificial neural network includes three layers of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.

Input units:
The action of the input units represents the raw information that is fed into the network.

Hidden units:
The action of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.

Output units:
The behaviour of the output units depends on the action of the hidden units and the weights between the hidden and output units.

4) Perceptrons

The most influential work on neural networks went under the heading of 'perceptrons', a term coined by Frank Rosenblatt. The perceptron turns out to be an MCP model (a McCulloch-Pitts neuron) with some additional, fixed preprocessing. Its association units remove specific, localized features from the input images. Perceptrons mimic the basic idea behind the human visual system. They were used mainly for pattern recognition, even though their abilities extended a lot further.
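As an illustration, here is a tiny perceptron trained with the classic learning rule to compute the logical AND of two inputs; the learning rate and epoch count are arbitrary choices:

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # the error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in AND])
# [0, 0, 0, 1] - the learned weights reproduce AND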

Learning Process
The learning of patterns and the subsequent response of the network can be divided into two general paradigms:
1) Associative mapping
In associative mapping the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied to the set of input units. The associative mapping can be divided into two mechanisms:

1a) Auto-association:
An input pattern is associated with itself, and the states of the input and output units coincide. This provides pattern completion: the network can reproduce a whole pattern whenever a portion of it or a distorted pattern is presented.

1b) Hetero-association:
Here the network stores pairs of patterns, building a relationship between two sets of patterns. It is associated with two recall mechanisms:

Nearest-neighbor recall:
Where the output pattern produced corresponds to the stored pattern that is closest to the input pattern presented.

Interpolative recall:
Where the output pattern is a similarity-based interpolation of the stored patterns corresponding to the pattern presented.
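Nearest-neighbour recall in miniature: return the stored pattern with the smallest Hamming distance to the (possibly distorted) pattern presented. The stored patterns below are invented for illustration:

stored = {
    "A": [1, 0, 1, 1, 0],
    "B": [0, 1, 1, 0, 1],
}

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def recall(pattern):
    return min(stored, key=lambda k: hamming(stored[k], pattern))

print(recall([1, 0, 1, 0, 0]))  # 'A' - the closest stored pattern wins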

2) Regularity detection
In regularity detection, units learn to respond to particular properties of the input patterns. Whereas in associative mapping the network stores associations among patterns, in regularity detection the response of each unit has a particular 'meaning'. This type of learning mechanism is vital for feature discovery and knowledge representation.

Every neural network has knowledge, which is contained in the values of the connection weights. Modifying the knowledge saved in the network as a function of experience implies a learning rule for changing the values of the weights. Information is saved in the weight matrix of a neural network, and learning is the determination of the weights. Following the way learning is performed, we can distinguish two types of neural networks:

i) Fixed networks
In which the weights remain fixed. In such networks, the weights are set a priori according to the problem to be solved.
ii) Adaptive networks
In which the weights are able to change. For these networks, all learning methods can be classified into two major types:
Supervised learning
This incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. Global information may be required during the learning process. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning.

Unsupervised learning
It uses no external teacher and is based only on local information. It is also referred to as self-organisation, because it self-organises the data presented to the network and detects their emergent collective properties.

Transfer Function
The behaviour of an artificial neural network depends on both the weights and the input-output (transfer) function that is specified for the units. This function typically falls into one of three types:
a) linear (or ramp)
b) threshold
c) sigmoid
For linear units the output activity is proportional to the total weighted input.
For threshold units the output is set at one of two levels, based on whether the total input is greater than or less than some threshold value.
For sigmoid units the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
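The three shapes as plain Python functions, for comparison:

import math

def linear(x, slope=1.0):
    return slope * x                   # output proportional to input

def threshold(x, theta=0.0):
    return 1.0 if x > theta else 0.0   # one of two levels

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # smooth and saturating

for x in (-2.0, 0.0, 2.0):
    print(x, linear(x), threshold(x), round(sigmoid(x), 3))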

To make a neural network that performs some specific work, we must choose how the units are interconnected to one another and we must set the weights on the connections appropriately. The connections decide whether it is possible for one unit to influence another. The weights define the strength of the influence.

Applications of neural networks

1) Detection of medical phenomena:
A variety of health-based indices (e.g., a combination of heart rate, levels of various substances in the blood, and respiration rate) can be monitored. The onset of a particular medical condition could be associated with a very complex combination of changes in a subset of the variables being monitored. Neural networks have been used to recognize this predictive pattern so that the appropriate treatment can be prescribed.

2) Stock market prediction:
Fluctuations of stock prices and stock indices are complex, multidimensional, deterministic phenomena. Neural networks are used by many technical analysts to make decisions about stock prices based upon a large number of factors, such as the past performance of other stocks.

3) Credit assignment
A number of pieces of information are usually known about an applicant for a loan. For instance, the applicant's age, education, and occupation may all be available. After training a neural network on historical data, neural network analysis can identify the most relevant characteristics and use them to classify applicants as good or bad credit risks.

Different types of MODEMS

Cable modems and ADSL modems are examples of the faster modems in use today. Cable modems use the cable TV infrastructure to provide users with access to digital signals, harnessing the high bandwidth of cable television networks. Read about finding the best cable modem.

ADSL modems connect computers or routers to a DSL phone line. Some DSL modems allow the sharing of ADSL service between a group of computers. For a comparative study of DSL and cable modems, read about choosing the best Internet service.

Satellite modems use communication satellites as relays to bring about data transfers. These modems convert bit streams into radio signals. They provide Internet users with satellite Internet access.

Wireless modems have revolutionized Internet access, as they can offer Internet connectivity using the now-ubiquitous mobile phone. Mobile phones serve as gateways between the service provider and computers. Find information about wireless Internet access.