## Sunday, June 26, 2011

### JAVA IEEE Projects Abstracts

AN EFFECTIVE ARCHITECTURE FOR AUTOMATED APPLIANCE MANAGEMENT SYSTEM APPLYING ONTOLOGY-BASED CLOUD DISCOVERY

ABSTRACT
Cloud computing is a computing paradigm that allows on-demand access to computing resources and storage over the Internet. Virtual appliances, pre-configured, ready-to-run applications, are emerging as a breakthrough technology for taming the complexity of service deployment on Cloud infrastructure.
However, previous works have neglected an automated approach to deploying the required appliances on the most suitable Cloud infrastructure, and that approach is the focus of this work.
In this paper, we propose an effective architecture using ontology-based discovery to provide QoS aware deployment of appliances on Cloud service providers. In addition, we test our approach on a case study and the result shows the efficiency and effectiveness of the proposed work.

KEYWORDS
Cloud Computing; Virtual Appliances; Semantic Web Service; Web Service Modeling Ontology (WSMO); Service Level Agreements (SLA); Open Virtualization Format (OVF).

======================================================
IEEE TRANSACTIONS ON INFORMATION THEORY
VOL. 56, NO. 1, JANUARY 2010

ANALYSIS OF ABSORBING SETS AND FULLY ABSORBING SETS OF ARRAY-BASED LDPC CODES

ABSTRACT
The class of low-density parity-check (LDPC) codes is attractive, since such codes can be decoded using practical message-passing algorithms, and their performance is known to approach the Shannon limits for suitably large block lengths. For the intermediate block lengths relevant in applications, however, many LDPC codes exhibit a so-called “error floor,” corresponding to a significant flattening in the curve that relates signal-to-noise ratio (SNR) to the bit-error rate (BER) level.
Previous work has linked this behavior to combinatorial substructures within the Tanner graph associated with an LDPC code, known as (fully) absorbing sets. These fully absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations, and exert the dominant effect on the low BER behavior of structured LDPC codes.
This paper provides a detailed theoretical analysis of these (fully) absorbing sets for the class of array-based LDPC codes, including the characterization of all minimal (fully) absorbing sets for the array-based LDPC codes studied; moreover, it develops techniques to enumerate them exactly. Theoretical results of this type provide a foundation for predicting and extrapolating the error floor behavior of LDPC codes.

INDEX TERMS
Absorbing set, bit-flipping, error floor, low-density parity-check (LDPC) codes, message-passing decoding, near-codeword, trapping set.

======================================================

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
VOL. 59, NO. 1, JANUARY 2010 383

A DISTRIBUTED CONSENSUS-BASED COOPERATIVE SPECTRUM-SENSING SCHEME IN COGNITIVE RADIOS

ABSTRACT
In cognitive radio (CR) networks, secondary users can cooperatively sense the spectrum to detect the presence of primary users. In this paper, we propose a fully distributed and scalable cooperative spectrum-sensing scheme based on recent advances in consensus algorithms.
In the proposed scheme, the secondary users can maintain coordination based on only local information exchange without a centralized common receiver.
Unlike most of the existing decision rules, such as the OR-rule or the 1-out-of-N rule, we use the consensus of secondary users to make the final decision.
Simulation results show that the proposed consensus scheme achieves significantly lower missed-detection and false-alarm probabilities in CR networks. It is also demonstrated that the proposed scheme not only has proven sensitivity in detecting the primary user’s presence but is also robust in the choice of a desirable decision threshold.

INDEX TERMS
Cognitive radios (CRs), consensus, cooperative spectrum sensing, random graphs.
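To make the consensus idea above concrete, here is a minimal Python sketch of a standard average-consensus iteration applied to cooperative sensing: each secondary user starts from its own local energy measurement and repeatedly averages with its neighbors, so all values converge to the network-wide mean, which every node then compares against a common detection threshold. The topology, step size, and threshold are illustrative assumptions, not taken from the paper.

```python
def consensus_sensing(measurements, neighbors, epsilon=0.1, iterations=200):
    """measurements: list of local energy readings, one per node.
    neighbors: dict mapping node index -> list of neighbor indices."""
    x = list(measurements)
    for _ in range(iterations):
        x_next = x[:]
        for i, xi in enumerate(x):
            # First-order consensus update: move toward neighbors' values.
            x_next[i] = xi + epsilon * sum(x[j] - xi for j in neighbors[i])
        x = x_next
    return x

# Example: 4 nodes on a ring; node 2 observes a strong primary signal.
readings = [1.0, 1.2, 5.0, 0.8]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = consensus_sensing(readings, nbrs)
threshold = 1.5
decision = final[0] > threshold  # every node ends up with the same value
```

Because each node only exchanges values with direct neighbors, no centralized fusion center is needed, which is exactly the property the abstract emphasizes.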

======================================================

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS VOL. 21, NO. 2, FEBRUARY 2010

A DISTRIBUTED PROTOCOL TO SERVE DYNAMIC GROUPS FOR PEER-TO-PEER STREAMING

ABSTRACT
Peer-to-peer (P2P) streaming has been widely deployed over the Internet. A streaming system usually has multiple channels, and peers may form multiple groups for content distribution. In this paper, we propose a distributed overlay framework (called SMesh) for dynamic groups where users may frequently hop from one group to another while the total pool of users remains stable.
SMesh first builds a relatively stable mesh consisting of all hosts for control messaging. The mesh supports dynamic host joining and leaving, and will guide the construction of delivery trees. Using the Delaunay Triangulation (DT) protocol as an example, we show how to construct an efficient mesh with low maintenance cost.
We further study various tree construction mechanisms based on the mesh, including embedded, bypass, and intermediate trees. Through simulations on Internet-like topologies, we show that SMesh achieves low delay and low link stress.

INDEX TERMS
Peer-to-peer streaming, dynamic group, Delaunay triangulation.

======================================================
IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 5, MAY 2010

A SCALABLE AND ENERGY-EFFICIENT CONTEXT MONITORING FRAMEWORK FOR MOBILE PERSONAL SENSOR NETWORKS

ABSTRACT
The key feature of many emerging pervasive computing applications is to proactively provide services to mobile individuals. One major challenge in providing users with proactive services lies in continuously monitoring users’ context based on numerous sensors in their PAN/BAN environments.
The context monitoring in such environments imposes heavy workloads on mobile devices and sensor nodes with limited computing and battery power. We present SeeMon, a scalable and energy-efficient context monitoring framework for sensor-rich, resource-limited mobile environments. Running on a personal mobile device, SeeMon effectively performs context monitoring involving numerous sensors and applications.
On top of SeeMon, multiple applications on the mobile device can proactively understand users’ contexts and react appropriately. This paper proposes a novel context monitoring approach that provides efficient processing and sensor control mechanisms.
We implement and test a prototype system on two mobile devices: a UMPC and a wearable device with a diverse set of sensors. Example applications are also developed based on the implemented system. Experimental results show that SeeMon achieves a high level of scalability and energy efficiency.

INDEX TERMS
Context monitoring, shared and incremental processing, sensor control, energy efficiency, personal computing, portable devices, ubiquitous computing, wireless sensor network, pervasive computing.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 6, JUNE 2010 765

ACHIEVABLE CAPACITY IN HYBRID DS-CDMA/OFDM SPECTRUM-SHARING

ABSTRACT
In this paper, we consider DS-CDMA/OFDM spectrum sharing systems and obtain the achievable capacity of the secondary service under different subchannel selection policies in the fading environment.
Subchannel selection policies fall into two categories: uniform and nonuniform subchannel selection. Uniform subchannel selection is preferred when a priori knowledge of subchannel state information is not available at the secondary transmitter.
For cases where such a priori knowledge is available, we study various nonuniform subchannel selection policies. In each case, we obtain the optimum secondary service power allocation and the corresponding maximum achievable capacity.
Then we present results on the scaling law of the opportunistic spectrum sharing in DS-CDMA/OFDM systems with multiple users. Numerical results show that the optimal subchannel selection is based on the minimum value of the subchannel gain between the secondary transmitter and the primary receiver.

INDEX TERMS
Dynamic spectrum access networks, DS-CDMA networks, interference threshold, OFDM, opportunistic spectrum access, spectrum sharing.

======================================================
IEEE TRANSACTIONS ON MOBILE COMPUTING
AUGUST 2010 (VOL. 9 NO. 8)

ACHIEVING GUARANTEED ANONYMITY IN GPS TRACES VIA UNCERTAINTY-AWARE PATH CLOAKING

ABSTRACT
The integration of Global Positioning System (GPS) receivers and sensors into mobile devices has enabled collaborative sensing applications, which monitor the dynamics of environments through opportunistic collection of data from many users' devices. One example that motivates this paper is a probe-vehicle-based automotive traffic monitoring system, which estimates traffic congestion from GPS velocity measurements reported from many drivers.
This paper considers the problem of achieving guaranteed anonymity in a locational data set that includes location traces from many users, while maintaining high data accuracy. We consider two methods of reidentifying anonymous location traces, target tracking and home identification, and observe that known privacy algorithms either cannot meet high application accuracy requirements or fail to provide privacy guarantees for drivers in low-density areas.
To overcome these challenges, we derive a novel time-to-confusion criterion to characterize privacy in a locational data set and propose a disclosure control algorithm (called uncertainty-aware path cloaking algorithm) that selectively reveals GPS samples to limit the maximum time-to-confusion for all vehicles.
Through trace-driven simulations using real GPS traces from 312 vehicles, we demonstrate that this algorithm effectively limits tracking risks, in particular, by eliminating tracking outliers. It also achieves significant data accuracy improvements compared to known algorithms.
We then present two enhancements to the algorithm. First, it also addresses the home identification risk by reducing location information revealed at the start and end of trips. Second, it also considers heading information reported by users in the tracking model. This version can thus protect users who are moving in dense areas but in a different direction from the majority.
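As a rough illustration of the disclosure-control idea, here is a much-simplified Python sketch: a vehicle's samples are released only while the time since its last "confusion point" (a sample where a tracker's uncertainty exceeds a threshold, approximated here by a boolean flag) stays below a maximum time-to-confusion bound. The flag computation, the bound, and the trace format are illustrative assumptions, not the paper's algorithm.

```python
def cloak_trace(samples, max_ttc):
    """samples: list of (timestamp, position, confused) tuples, time-ordered.
    Returns the subset of samples considered safe to release."""
    released = []
    last_confusion = None
    for t, pos, confused in samples:
        if confused:
            last_confusion = t
        # Release only while a tracker was recently "confused" about this vehicle.
        if last_confusion is not None and t - last_confusion <= max_ttc:
            released.append((t, pos))
    return released

trace = [(0, (0, 0), True), (5, (1, 0), False),
         (20, (2, 0), False), (21, (3, 0), True)]
safe = cloak_trace(trace, max_ttc=10)  # suppresses the sample at t=20
```

The point of the real algorithm is that this bound holds for all vehicles simultaneously, including outliers in low-density areas, which simple uniform subsampling cannot guarantee.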

======================================================
ACHIEVING SECURE, SCALABLE, AND FINE-GRAINED DATA ACCESS CONTROL IN CLOUD COMPUTING

ABSTRACT
Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners.
To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved.
This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption.
Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.

======================================================

IEEE TRANSACTIONS ON COMPUTERS
VOL. 59, NO. 5, MAY 2010 707

ADAPTATION OF REPUTATION MANAGEMENT SYSTEMS TO DYNAMIC NETWORK CONDITIONS IN AD HOC NETWORKS

ABSTRACT
Reputation management systems have been proposed as a cooperation enforcement solution in ad hoc networks. Typically, the functions of reputation management (evaluation, detection, and reaction) are carried out homogeneously across time and space. However, the dynamic nature of ad hoc networks causes node behavior to vary both spatially and temporally due to changes in local and network-wide conditions.
When reputation management functions do not adapt to such changes, their effectiveness, measured in terms of accuracy (correct identification of node behavior) and promptness (timely identification of node misbehavior), may be compromised. We propose an adaptive reputation management system that realizes that changes in node behavior may be driven by changes in network conditions and that accommodates such changes by adapting its operating parameters.
We introduce a time-slotted approach to allow the evaluation function to quickly and accurately capture changes in node behavior. We show how the duration of an evaluation slot can adapt according to the network’s activity to enhance the system accuracy and promptness. We then show how the detection function can utilize a Sequential Probability Ratio Test (SPRT) to distinguish between cooperative and misbehaving neighbors.
The SPRT adapts to changes in neighbors’ behavior that are a by-product of changing network conditions by using the node’s own behavior as a benchmark. We compare our proposed solution to a nonadaptive system, showing the ability of our system to achieve high accuracy and promptness in dynamic environments. To the best of our knowledge, this is the first work to explore the adaptation of reputation management functions to changes in network conditions.

INDEX TERMS
Reputation management, ad hoc networks, cooperation, adaptation, node misbehavior.
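For readers unfamiliar with the SPRT mentioned above, here is a self-contained Python sketch of Wald's sequential probability ratio test applied to packet-forwarding observations. The forwarding probabilities for the two hypotheses and the error rates are illustrative assumptions; the paper's contribution is adapting these parameters to network conditions, which this sketch omits.

```python
import math

def sprt(observations, p0=0.9, p1=0.5, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on forwarding outcomes.
    p0: forwarding probability of a cooperative node (H0),
    p1: forwarding probability of a misbehaving node (H1).
    alpha/beta: tolerated false-alarm and missed-detection rates.
    Returns 'cooperative', 'misbehaving', or 'undecided'."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for forwarded in observations:
        # Accumulate the log-likelihood ratio of H1 versus H0.
        if forwarded:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'misbehaving'
        if llr <= lower:
            return 'cooperative'
    return 'undecided'

verdict = sprt([True, True, False, True, True, True, True, True, True, True])
```

The appeal of the SPRT here is exactly what the abstract claims: it reaches a decision with the fewest observations on average for given error rates, which translates into promptness.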

======================================================
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
VOL. 22, NO. 8, AUGUST 2010

ADAPTIVE JOIN OPERATORS FOR RESULT RATE OPTIMIZATION ON STREAMING INPUTS

ABSTRACT
Adaptive join algorithms have recently attracted a lot of attention in emerging applications where data are provided by autonomous data sources through heterogeneous network environments. Their main advantage over traditional join techniques is that they can start producing join results as soon as the first input tuples are available, thus, improving pipelining by smoothing join result production and by masking source or network delays.
In this paper, we first propose Double Index NEsted-loops Reactive join (DINER), a new adaptive two-way join algorithm for result rate maximization. DINER combines two key elements: an intuitive flushing policy that aims to increase the productivity of in-memory tuples in producing results during the online phase of the join, and a novel reentrant join technique that allows the algorithm to rapidly switch between processing in-memory and disk-resident tuples, thus, better exploiting temporary delays when new data are not available.
We then extend the applicability of the proposed technique for a more challenging setup: handling more than two inputs. Multiple Index NEsted-loop Reactive join (MINER) is a multiway join operator that inherits its principles from DINER.
Our experiments using real and synthetic data sets demonstrate that DINER outperforms previous adaptive join algorithms in producing result tuples at a significantly higher rate, while making better use of the available memory. Our experiments also show that, in the presence of multiple inputs, MINER manages to produce a high percentage of early results, outperforming existing techniques for adaptive multiway joins.

INDEX TERMS
Query processing, join, DINER, MINER, streams.

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 2, FEBRUARY 2010 505

ADAPTIVE MULTI-NODE INCREMENTAL RELAYING FOR HYBRID-ARQ IN AF RELAY NETWORKS

ABSTRACT
This paper proposes an adaptive multi-node incremental relaying technique for cooperative communications with amplify-and-forward (AF) relays. In order to reduce the excessive burden of maximal-ratio combining (MRC) with all diversity paths at the destination node, the destination node decides whether to combine signals over only the first m (< N) time slots/frames or over all N time slots, where N is the number of relay nodes.

Our analytical and simulation results show that the proposed adaptive multi-node incremental relaying outperforms the conventional MRC in terms of outage probability in AF-based cooperative communications, since the proposed scheme effectively reduces the spectral efficiency loss. Our asymptotic analysis also shows that the proposed adaptive multi-node incremental relaying achieves full diversity order.

INDEX TERMS
Relay communications, hybrid-ARQ, amplify-and-forward.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 1, FEBRUARY 2010 307

ALWAYS ACYCLIC DISTRIBUTED PATH COMPUTATION

ABSTRACT
Distributed routing algorithms may give rise to transient loops during path recomputation, which can pose significant stability problems in high-speed networks. We present a new algorithm, Distributed Path Computation with Intermediate Variables (DIV), which can be combined with any distributed routing algorithm to guarantee that the directed graph induced by the routing decisions remains acyclic at all times.
The key contribution of DIV, besides its ability to operate with any routing algorithm, is an update mechanism using simple message exchanges between neighboring nodes that guarantees loop-freedom at all times.
DIV provably outperforms existing loop-prevention algorithms in several key metrics such as frequency of synchronous updates and the ability to maintain paths during transitions. Simulation results quantifying these gains in the context of shortest path routing are presented.
In addition, DIV’s universal applicability is illustrated by studying its use with a routing algorithm that operates according to a non-shortest-path objective. Specifically, the routing seeks robustness against failures by maximizing the number of next-hops available at each node for each destination.

INDEX TERMS
Distance-vector routing, loop-free routing.
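The core invariant behind an algorithm like DIV can be sketched in a few lines of Python: every node holds a value, and a node may only point its successor at a neighbor whose value is strictly smaller than its own, so values strictly decrease along every path and no cycle can form. The update rule and names below are illustrative simplifications, not DIV's actual message protocol (which additionally handles safe value increases via neighbor acknowledgments).

```python
def can_adopt_successor(values, node, candidate):
    # Safe switch: strictly decreasing values rule out transient loops.
    return values[candidate] < values[node]

def is_acyclic(successor):
    """Check that the routing graph induced by successor choices has no cycle."""
    for start in successor:
        seen, n = set(), start
        while n is not None:
            if n in seen:
                return False
            seen.add(n)
            n = successor.get(n)
    return True

values = {'a': 3, 'b': 2, 'c': 1, 'd': 0}        # d is the destination
succ = {'a': 'b', 'b': 'c', 'c': 'd', 'd': None}
assert can_adopt_successor(values, 'a', 'c')      # 1 < 3: allowed
assert not can_adopt_successor(values, 'c', 'a')  # 3 >= 1: would risk a loop
assert is_acyclic(succ)
```

Decreasing one's own value is always safe under this rule; the hard part DIV solves is letting a node *increase* its value (e.g., after a link failure) without transiently violating the invariant at its neighbors.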

======================================================

AN ANALYSIS OF TRACES FROM A PRODUCTION
MAPREDUCE CLUSTER

ABSTRACT
MapReduce is a programming paradigm for parallel processing that is increasingly being used for data-intensive applications in cloud computing environments.
An understanding of the characteristics of workloads running in MapReduce environments benefits both the service providers in the cloud and users: the service provider can use this knowledge to make better scheduling decisions, while the user can learn what aspects of their jobs impact performance.
This paper analyzes 10 months of MapReduce logs from the M45 supercomputing cluster, which Yahoo! made freely available to select universities for academic research. We characterize resource utilization patterns, job patterns, and sources of failures.
We use an instance-based learning technique that exploits temporal locality to predict job completion times from historical data and identify potential performance problems in our dataset.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
APRIL 2010 (VOL. 9 NO. 4)

ENERGY-EFFICIENT VOIP OVER WIRELESS LANS

ABSTRACT
Emerging dual-mode phones incorporate a Wireless LAN (WLAN) interface along with the traditional cellular interface. The additional benefits of the WLAN interface are, however, likely to be outweighed by its greater rate of energy consumption.
This is especially of concern when real-time applications, that result in continuous traffic, are involved. WLAN radios typically conserve energy by staying in sleep mode. With real-time applications like Voice over Internet Protocol (VoIP), this can be challenging since packets delayed above a threshold are lost. Moreover, the continuous nature of traffic makes it difficult for the radio to stay in the lower power sleep mode enough to reduce energy consumption significantly.
In this work, we propose the GreenCall algorithm to derive sleep/wake-up schedules for the WLAN radio to save energy during VoIP calls while keeping application quality within levels acceptable to users. We evaluate GreenCall on commodity hardware, study its performance over diverse network paths, and describe our experiences in the process.
We further extensively investigate the effect of different application parameters on possible energy savings through trace-based simulations. We show that, in spite of the interactive, real-time nature of voice, energy consumption during calls can be reduced by close to 80 percent in most instances.

INDEX TERMS
Voice over IP (VoIP), wireless LANs, energy consumption, portable communication devices, Internet.
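A back-of-the-envelope Python sketch shows why sleep/wake-up scheduling pays off for VoIP: packets arrive every `interval` ms, any packet delayed beyond `budget` ms is useless, so the radio can sleep at most `budget` ms at a time and then wake to drain the buffered packets in a burst. The numbers and the linear drain model are illustrative assumptions, not measurements from the paper.

```python
def awake_fraction(interval_ms, budget_ms, drain_ms_per_packet=1.0):
    """Fraction of time the radio must stay awake per sleep/drain cycle."""
    buffered = budget_ms / interval_ms       # packets queued during one sleep
    awake = buffered * drain_ms_per_packet   # time to transmit the burst
    return awake / (budget_ms + awake)

# 20 ms VoIP packetization, 200 ms tolerable delay:
frac = awake_fraction(20, 200)
```

With these toy numbers the radio is awake under 5% of the time, consistent in spirit with the abstract's claim of close to 80 percent energy savings once wake-up overheads and real channel conditions are accounted for.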

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 4, APRIL 2010 527

ENERGY-OPTIMAL SCHEDULING WITH DYNAMIC CHANNEL ACQUISITION IN WIRELESS DOWNLINKS

ABSTRACT
We consider a wireless base station serving L users through L time-varying channels. It is well known that opportunistic scheduling algorithms with full channel state information (CSI) can stabilize the system with any data rates within the capacity region.
However, such opportunistic scheduling algorithms may not be energy efficient when the cost of channel acquisition is high and traffic rates are low. In particular, under the low traffic rate regime, it may be sufficient and more energy efficient to transmit data with no CSI, i.e., to transmit data blindly, since no power for channel acquisition is consumed.
In general, we show that strategies that probe channels in every slot, or never probe channels in any slot, are not necessarily optimal, and that mixed strategies must be considered. We derive a unified scheduling algorithm that dynamically chooses to transmit data with full or no CSI based on queue backlog and channel statistics.
Our methodology is general and can be naturally extended to include timing overhead due to channel acquisition, and to treat systems that allow any subset of channels to be measured. Through Lyapunov analysis, we show that the unified algorithm is throughput-optimal and stabilizes the downlink with optimal power consumption, balancing well between channel-aware and channel-blind transmission modes.

INDEX TERMS
Stochastic control, queuing analysis, optimization, partial channel state information.
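To give a flavor of how a backlog-based choice between modes can look, here is a generic drift-plus-penalty style comparison in Python: each slot, weigh expected queue service against power cost and pick whichever mode (probe channels for full CSI, or transmit blindly) scores higher. The rates, powers, and weight V are illustrative assumptions, and this is the standard Lyapunov-optimization pattern rather than the paper's exact algorithm.

```python
def choose_mode(backlog, exp_rate_full, rate_blind, p_tx, p_probe, V=10.0):
    """Return 'probe' or 'blind' for the current slot.
    backlog: current queue length; V trades off backlog against power."""
    # Probing pays an extra power cost but serves at the (higher)
    # opportunistic rate; blind transmission saves the probe power.
    score_probe = backlog * exp_rate_full - V * (p_tx + p_probe)
    score_blind = backlog * rate_blind - V * p_tx
    return 'probe' if score_probe > score_blind else 'blind'

# Low backlog: channel acquisition cost dominates -> transmit blindly.
assert choose_mode(backlog=2, exp_rate_full=3.0, rate_blind=1.0,
                   p_tx=1.0, p_probe=0.5) == 'blind'
# High backlog: the opportunistic gain from CSI pays for the probe.
assert choose_mode(backlog=50, exp_rate_full=3.0, rate_blind=1.0,
                   p_tx=1.0, p_probe=0.5) == 'probe'
```

This captures the abstract's intuition: under low traffic (small backlogs) blind transmission is more energy efficient, while large backlogs justify paying for channel state information.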

======================================================
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
VOL. 14, NO. 1, FEBRUARY 2010 1

ANALYSIS OF COMPUTATIONAL TIME OF SIMPLE ESTIMATION OF DISTRIBUTION ALGORITHMS

ABSTRACT
Estimation of distribution algorithms (EDAs) are widely used in stochastic optimization. Impressive experimental results have been reported in the literature. However, little work has been done on analyzing the computation time of EDAs in relation to the problem size. It is still unclear how well EDAs (with a finite population size larger than two) will scale up when the dimension of the optimization problem (problem size) goes up.
This paper studies the computational time complexity of a simple EDA, i.e., the univariate marginal distribution algorithm (UMDA), in order to gain more insight into the complexity of EDAs. First, we discuss how to measure the computational time complexity of EDAs. A classification of problem hardness based on our discussion is then given.
Second, we prove a theorem related to problem hardness and the probability conditions of EDAs. Third, we propose a novel approach to analyzing the computational time complexity of UMDA using discrete dynamic systems and Chernoff bounds. Following this approach, we are able to derive a number of results on the first hitting time of UMDA on a well-known unimodal pseudo-Boolean function, i.e., the LeadingOnes problem, and another problem derived from LeadingOnes, named BVLeadingOnes.
Although both problems are unimodal, our analysis shows that LeadingOnes is easy for the UMDA, while BVLeadingOnes is hard for the UMDA. Finally, in order to address the key issue of what problem characteristics make a problem hard for UMDA, we discuss in depth the idea of “margins” (or relaxation). We prove theoretically that the UMDA with margins can solve the BVLeadingOnes problem efficiently.

INDEX TERMS
Computational time complexity, estimation of distribution algorithms, first hitting time, heuristic optimization, univariate marginal distribution algorithms.
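For readers who have not seen the UMDA, here is a minimal Python sketch on the LeadingOnes problem: keep a probability vector p, sample a population, select the best half, and re-estimate each marginal from the selected individuals, optionally clipping p away from 0 and 1 ("margins", as the abstract discusses). Population sizes and the margin value are illustrative choices.

```python
import random

def leading_ones(x):
    """Number of consecutive 1-bits at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def umda(n=20, pop=100, keep=50, margin=None, max_gens=200, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n
    best = []
    for _ in range(max_gens):
        population = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        population.sort(key=leading_ones, reverse=True)
        best = population[0]
        if leading_ones(best) == n:
            return best
        selected = population[:keep]
        for i in range(n):
            p[i] = sum(ind[i] for ind in selected) / keep
            if margin is not None:  # margins keep p away from 0 and 1
                p[i] = min(max(p[i], margin), 1.0 - margin)
    return best

solution = umda(margin=1.0 / 20)  # margins of 1/n, in the spirit of the paper
```

Without margins, marginals can prematurely fix at 0, which is exactly the failure mode that makes problems like BVLeadingOnes hard for the plain UMDA.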

======================================================

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
JANUARY 2010 (VOL. 22 NO. 1)

ANONYMOUS QUERY PROCESSING

ABSTRACT
The increasing availability of location-aware mobile devices has given rise to a flurry of location-based services (LBSs). Due to the nature of spatial queries, an LBS needs the user position in order to process her requests.
On the other hand, revealing exact user locations to a (potentially untrusted) LBS may pinpoint their identities and breach their privacy. To address this issue, spatial anonymity techniques obfuscate user locations, forwarding to the LBS a sufficiently large region instead.
Existing methods explicitly target processing in the Euclidean space and do not apply when proximity to the users is defined according to network distance (e.g., driving time through the roads of a city). In this paper, we propose a framework for anonymous query processing in road networks.
We design location obfuscation techniques that: 1) provide anonymous LBS access to the users, and 2) allow efficient query processing at the LBS side. Our techniques exploit existing network database infrastructure, requiring no specialized storage schemes or functionalities.
We experimentally compare alternative designs in real road networks and demonstrate the effectiveness of our techniques.
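One way to picture network-based obfuscation is the following Python sketch: starting from the querying user's road segment, expand breadth-first over adjacent segments until the covered segments host at least K users, then forward that segment set to the LBS instead of the exact location. The road graph, user counts, and the expansion order are illustrative assumptions, not the paper's specific designs.

```python
from collections import deque

def cloak(adjacency, users_on, start, K):
    """adjacency: segment -> list of neighboring segments;
    users_on: segment -> number of users currently on it.
    Returns a set of segments covering at least K users (if reachable)."""
    covered, queue = {start}, deque([start])
    count = users_on.get(start, 0)
    while queue and count < K:
        seg = queue.popleft()
        for nxt in adjacency[seg]:
            if nxt not in covered:
                covered.add(nxt)
                count += users_on.get(nxt, 0)
                queue.append(nxt)
    return covered

roads = {'s1': ['s2'], 's2': ['s1', 's3'], 's3': ['s2', 's4'], 's4': ['s3']}
users = {'s1': 1, 's2': 0, 's3': 2, 's4': 5}
region = cloak(roads, users, 's1', K=3)  # expands until 3 users are covered
```

Because the cloaked region is a set of road segments rather than a Euclidean rectangle, the LBS can process the query directly with ordinary network-distance operators.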

======================================================
IEEE TRANSACTIONS ON FUZZY SYSTEMS
VOLUME 18, ISSUE 3 (JUNE 2010)

APPLYING THE POSSIBILISTIC C-MEANS ALGORITHM IN KERNEL-INDUCED SPACES

ABSTRACT
In this paper, we study a kernel extension of the classic possibilistic c-means. In the proposed extension, we implicitly map input patterns into a possibly high-dimensional space by means of positive semidefinite kernels.
In this new space, we model the mapped data by means of the possibilistic clustering algorithm. We study in more detail the special case where we model the mapped data using a single cluster only, since it turns out to have many interesting properties. The modeled memberships in kernel-induced spaces yield a modeling of generic shapes in the input space.
We analyze in detail the connections to one-class support vector machines and kernel density estimation, thus, suggesting that the proposed algorithm can be used in many scenarios of unsupervised learning.
In the experimental part, we analyze the stability and accuracy of the proposed algorithm on some synthetic and real datasets. The results show high stability and good performance in terms of accuracy.

======================================================
CHECKPOINT-BASED FAULT-TOLERANT INFRASTRUCTURE FOR VIRTUALIZED SERVICE PROVIDERS

ABSTRACT
Crash and omission failures are common in service providers: a disk can break down or a link can fail at any time. In addition, the probability of a node failure increases with the number of nodes.
Apart from reducing the provider’s computation power and jeopardizing the fulfillment of its contracts, this can also waste computation time when the crash occurs before the task execution finishes. In order to avoid this problem, efficient checkpoint infrastructures are required, especially in virtualized environments where these infrastructures must deal with huge virtual machine images.
This paper proposes a smart checkpoint infrastructure for virtualized service providers. It uses Another Union File System (AUFS) to differentiate the read-only from the read-write parts of the virtual machine image. In this way, read-only parts can be checkpointed only once, while subsequent checkpoints need only save the modifications to read-write parts, thus reducing the time needed to make a checkpoint.
The checkpoints are stored in a Hadoop Distributed File System. This allows resuming a task execution faster after a node crash and increasing the fault tolerance of the system, since checkpoints are distributed and replicated in all the nodes of the provider.
This paper presents a running implementation of this infrastructure and its evaluation, demonstrating that it is an effective way to make faster checkpoints with low interference on task execution and efficient task recovery after a node failure.

======================================================

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
JULY 2010 (VOL. 22 NO. 7)

CLOSENESS: A NEW PRIVACY
MEASURE FOR DATA PUBLISHING

ABSTRACT
The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain “identifying” attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure.
The notion of ℓ-diversity has been proposed to address this; ℓ-diversity requires that each equivalence class has at least ℓ well-represented values for each sensitive attribute. In this article, we show that ℓ-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called “closeness”.
We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t).
We then propose a more flexible privacy model called (n, t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.

INDEX TERMS
Privacy preservation, data anonymization, data publishing, data security.
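Here is a simplified t-closeness check in Python, using total variation distance in place of the Earth Mover's Distance the paper recommends (EMD additionally accounts for semantic distance between sensitive values, which this sketch ignores). Categorical sensitive values and the example table are illustrative assumptions.

```python
from collections import Counter

def distribution(values):
    """Empirical distribution of a list of categorical values."""
    total = len(values)
    return {v: c / total for v, c in Counter(values).items()}

def variation_distance(p, q):
    """Total variation distance between two distributions (dicts)."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

def satisfies_t_closeness(classes, t):
    """classes: list of lists of sensitive values, one per equivalence class.
    Holds if every class's distribution is within t of the overall one."""
    overall = distribution([v for cls in classes for v in cls])
    return all(variation_distance(distribution(cls), overall) <= t
               for cls in classes)

# Two equivalence classes over a disease attribute:
eq_classes = [['flu', 'flu', 'cancer'], ['flu', 'cancer', 'cancer']]
ok = satisfies_t_closeness(eq_classes, t=0.2)
```

The check makes the definition's intent visible: an equivalence class leaks information only to the extent that its sensitive-value distribution deviates from what the whole table already reveals.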

======================================================
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 6, JUNE 2010

CLUSTERING AND CLUSTER-BASED ROUTING PROTOCOL FOR DELAY-TOLERANT MOBILE NETWORKS

ABSTRACT
This research investigates a distributed clustering scheme and proposes a cluster-based routing protocol for Delay-Tolerant Mobile Networks (DTMNs). The basic idea is to distributively group mobile nodes with similar mobility patterns into clusters, which can then interchangeably share their resources (such as buffer space) for overhead reduction and load balancing, aiming to achieve efficient and scalable routing in DTMNs.
Due to the lack of continuous communications among mobile nodes and possible errors in the estimation of nodal contact probability, convergence and stability become major challenges in distributed clustering in DTMNs. To this end, an exponentially weighted moving average (EWMA) scheme is employed to update nodal contact probability online, with its mean proven to converge to the true contact probability.
Based on nodal contact probabilities, a set of functions including Sync(), Leave(), and Join() are devised for cluster formation and gateway selection. Finally, the gateway nodes exchange network information and perform routing.
Extensive simulations are carried out to evaluate the effectiveness and efficiency of the proposed cluster-based routing protocol. The simulation results show that it achieves higher delivery ratio and significantly lower overhead and end-to-end delay compared with its non-clustering counterpart.
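The on-line EWMA update of nodal contact probability described above can be sketched as follows; the smoothing weight, initial estimate, and the binary per-slot contact observation are illustrative assumptions, not values from the paper.

```java
public class ContactEstimator {
    private double p;            // current estimate of the contact probability
    private final double alpha;  // EWMA weight on the newest observation

    ContactEstimator(double alpha, double initial) {
        this.alpha = alpha;
        this.p = initial;
    }

    // One observation per time slot: did this pair of nodes meet?
    // The mean of this EWMA converges to the true contact probability.
    void observe(boolean met) {
        p = alpha * (met ? 1.0 : 0.0) + (1.0 - alpha) * p;
    }

    double estimate() { return p; }

    public static void main(String[] args) {
        java.util.Random rng = new java.util.Random(7);
        ContactEstimator e = new ContactEstimator(0.05, 0.5);
        for (int slot = 0; slot < 10_000; slot++) {
            e.observe(rng.nextDouble() < 0.3); // true contact probability is 0.3
        }
        System.out.println(e.estimate());      // stays near 0.3
    }
}
```

A small alpha smooths out noisy observations at the cost of slower reaction to mobility changes, which is exactly the convergence/stability tension the abstract highlights.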

INDEX TERMS
Clustering, delay-tolerant networks, routing.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 6, JUNE 2010

COMBINED AUTHENTICATION-BASED MULTILEVEL ACCESS CONTROL IN MOBILE APPLICATION FOR DAILY LIFE SERVICE

ABSTRACT
In current computing environments, collaborative computing has been a central concern in Ubiquitous, Convergent, and Social Computing. “MobiLife” and “MyLifeBits” are the leading projects for representing daily life services, and their systems require complicated and collaborative network systems.
These collaborative computing environments pose high potential risks to users’ security and privacy because of diverse attack routes. In order to solve these problems, we design combined authentication and multilevel access control, which applies cryptographic methods to a personal database in the “MyLifeBits” system.
We propose a scheme that is flexible under dynamic access-authorization changes, secure against attacks from various routes, privacy preserving, multifunctional, and requires a minimal number of protocol rounds.

INDEX TERMS
Combined authentication, multilevel access control, integrated security management, daily life service, MyLifeBits, mobile phone, information classification, personal DB.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING

CONDITIONAL SHORTEST PATH ROUTING IN DELAY TOLERANT NETWORKS

ABSTRACT
Delay tolerant networks are characterized by the sporadic connectivity between their nodes and therefore the lack of stable end-to-end paths from source to destination. Since the future node connections are mostly unknown in these networks, opportunistic forwarding is used to deliver messages.
However, making effective forwarding decisions using only the network characteristics (i.e. average intermeeting time between nodes) extracted from contact history is a challenging problem. Based on the observations about human mobility traces and the findings of previous work, we introduce a new metric called conditional intermeeting time, which computes the average intermeeting time between two nodes relative to a meeting with a third node using only the local knowledge of the past contacts.
We then look at the effects of the proposed metric on the shortest path based routing designed for delay tolerant networks. We propose Conditional Shortest Path Routing (CSPR) protocol that routes the messages over conditional shortest paths in which the cost of links between nodes is defined by conditional intermeeting times rather than the conventional intermeeting times.
Through trace-driven simulations, we demonstrate that CSPR achieves a higher delivery rate and lower end-to-end delay compared to the shortest path based routing protocols that use the conventional intermeeting time as the link metric.
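A minimal sketch of the routing idea: run a standard shortest-path computation (Dijkstra here) with conditional intermeeting times as the additive link costs. The cost matrix below is illustrative; the real protocol derives these costs from each node's local contact history.

```java
import java.util.Arrays;

public class CSPR {
    // cost[u][v]: conditional intermeeting time between nodes u and v
    // (use Double.POSITIVE_INFINITY for pairs that never meet).
    static double[] shortestDelays(double[][] cost, int src) {
        int n = cost.length;
        double[] dist = new double[n];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[src] = 0.0;
        boolean[] done = new boolean[n];
        for (int iter = 0; iter < n; iter++) {
            // Pick the closest unsettled node.
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (u == -1 || dist[u] == Double.POSITIVE_INFINITY) break;
            done[u] = true;
            // Relax all edges out of u.
            for (int v = 0; v < n; v++)
                dist[v] = Math.min(dist[v], dist[u] + cost[u][v]);
        }
        return dist;
    }

    public static void main(String[] args) {
        // Illustrative symmetric cost matrix for three nodes.
        double[][] cost = { { 0, 5, 10 }, { 5, 0, 3 }, { 10, 3, 0 } };
        System.out.println(Arrays.toString(shortestDelays(cost, 0))); // [0.0, 5.0, 8.0]
    }
}
```

Swapping conventional intermeeting times for conditional ones changes only the contents of the cost matrix, not the shortest-path machinery, which is what makes the metric easy to drop into existing DTN routing.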

======================================================
IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 2, APRIL 2010

CONSTRAINED RELAY NODE PLACEMENT IN WIRELESS SENSOR NETWORKS: FORMULATION AND APPROXIMATIONS

ABSTRACT
One approach to prolong the lifetime of a wireless sensor network (WSN) is to deploy some relay nodes to communicate with the sensor nodes, other relay nodes, and the base stations. The relay node placement problem for wireless sensor networks is concerned with placing a minimum number of relay nodes into a wireless sensor network to meet certain connectivity or survivability requirements.
Previous studies have concentrated on the unconstrained version of the problem in the sense that relay nodes can be placed anywhere. In practice, there may be some physical constraints on the placement of relay nodes. To address this issue, we study constrained versions of the relay node placement problem, where relay nodes can only be placed at a set of candidate locations.
In the connected relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with a base station through a bidirectional path. In the survivable relay node placement problem, we want to place a minimum number of relay nodes to ensure that each sensor node is connected with two base stations (or the only base station in case there is only one base station) through two node-disjoint bi-directional paths.
For each of the two problems, we discuss its computational complexity and present a framework of polynomial time approximation algorithms with small approximation ratios. Extensive numerical results show that our approximation algorithms can produce solutions very close to optimal solutions.

INDEX TERMS
Approximation algorithms, connectivity and survivability, relay node placement, wireless sensor networks (WSNs).

======================================================
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
VOL. 21, NO. 2, FEBRUARY 2010

COOPERATIVE CACHING IN WIRELESS P2P NETWORKS: DESIGN, IMPLEMENTATION, AND EVALUATION

Jing Zhao, Student Member, IEEE, Ping Zhang,
Guohong Cao, Senior Member, IEEE, and Chita R. Das, Fellow, IEEE

ABSTRACT
Some recent studies have shown that cooperative cache can improve the system performance in wireless P2P networks such as ad hoc networks and mesh networks. However, all these studies are at a very high level, leaving many design and implementation issues unanswered.
In this paper, we present our design and implementation of cooperative cache in wireless P2P networks, and propose solutions to find the best place to cache the data. We propose a novel asymmetric cooperative cache approach, where the data requests are transmitted to the cache layer on every node, but the data replies are only transmitted to the cache layer at the intermediate nodes that need to cache the data.
This solution not only reduces the overhead of copying data between the user space and the kernel space, it also allows data pipelines to reduce the end-to-end delay. We also study the effects of different MAC layers, such as 802.11-based ad hoc networks and multi-interface-multichannel-based mesh networks, on the performance of cooperative cache.
Our results show that the asymmetric approach outperforms the symmetric approach in traditional 802.11-based ad hoc networks by removing most of the processing overhead. In mesh networks, the asymmetric approach can significantly reduce the data access delay compared to the symmetric approach due to data pipelines.

INDEX TERMS
Wireless networks, P2P networks, cooperative cache.

======================================================

IEEE TRANSACTIONS ON SIGNAL PROCESSING
VOL. 58, NO. 3, MARCH 2010

COVARIANCE ESTIMATION IN DECOMPOSABLE GAUSSIAN GRAPHICAL MODELS

ABSTRACT
Graphical models are a framework for representing and exploiting prior conditional independence structures within distributions using graphs. In the Gaussian case, these models are directly related to the sparsity of the inverse covariance (concentration) matrix and allow for improved covariance estimation with lower computational complexity.
We consider concentration estimation with the mean-squared error (MSE) as the objective, in a special type of model known as decomposable. This model includes, for example, the well known banded structure and other cases encountered in practice.
Our first contribution is the derivation and analysis of the minimum variance unbiased estimator (MVUE) in decomposable graphical models. We provide a simple closed form solution to the MVUE and compare it with the classical maximum likelihood estimator (MLE) in terms of performance and complexity. Next, we extend the celebrated Stein’s unbiased risk estimate (SURE) to graphical models.
Using SURE, we prove that the MSE of the MVUE is always smaller or equal to that of the biased MLE, and that the MVUE itself is dominated by other approaches. In addition, we propose the use of SURE as a constructive mechanism for deriving new covariance estimators.
Similarly to the classical MLE, all of our proposed estimators have simple closed form solutions but result in a significant reduction in MSE.

INDEX TERMS
Covariance estimation, graphical models, minimum variance unbiased estimation.

======================================================

MOBILE COMPUTING, IEEE TRANSACTIONS ON
ISSUE DATE: APRIL 2010, VOLUME: 9 ISSUE:4

DCAR: DISTRIBUTED CODING-AWARE ROUTING IN WIRELESS NETWORKS

ABSTRACT
Recently, there has been growing interest in using network coding to improve the performance of wireless networks. For example, the practical wireless network coding system COPE demonstrated the throughput gain achieved by network coding.
However, COPE has two fundamental limitations:
(1) the coding opportunity is crucially dependent on the established routes and
(2) the coding structure in COPE is limited within a two-hop region only.
The aim of this paper is to overcome these limitations. In particular, we propose DCAR, the distributed coding-aware routing mechanism which enables:
(1) the discovery of available paths between a given source and destination and
(2) the detection of potential network coding opportunities over a much wider network region. One interesting result is that DCAR has the capability to discover high-throughput paths with coding opportunities, while conventional wireless network routing protocols fail to do so. In addition, DCAR can detect coding opportunities on the entire path, thus eliminating the “two-hop” coding limitation in COPE.
We also propose a novel routing metric called the coding-aware routing metric (CRM), which facilitates the performance comparison between “coding-possible” and “coding-impossible” paths. We implement the DCAR system in ns-2 and carry out extensive evaluation. We show that, compared to the coding mechanism in COPE, DCAR can achieve a much higher throughput gain.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 2, APRIL 2010

DEMAND-AWARE CONTENT DISTRIBUTION ON THE INTERNET

ABSTRACT
The rapid growth of media content distribution on the Internet in the past few years has brought with it commensurate increases in the costs of distributing that content. Can the content distributor defray these costs through a more innovative approach to distribution? In this paper, we evaluate the benefits of a hybrid system that combines peer-to-peer and a centralized client–server approach against each method acting alone.
A key element of our approach is to explicitly model the temporal evolution of demand. In particular, we employ a word-of-mouth demand evolution model due to Bass [2] to represent the evolution of interest in a piece of content. Our analysis is carried out in a scaling regime determined by the total potential mass of customers in the market.
Using this approach, we study the relative performance of peer-to-peer and centralized client–server schemes, as well as a hybrid of the two—both from the point of view of consumers as well as the content distributor.
We show how awareness of demand can be used to attain a given average delay target with lowest possible utilization of the central server by using the hybrid scheme. We also show how such awareness can be used to take provisioning decisions. Our insights are obtained in a fluid model and supported by stochastic simulations.

INDEX TERMS
Bass diffusion, content distribution, delay guarantees, peer-to-peer (P2P).

======================================================

IEEE WIRELESS COMMUNICATIONS
VOLUME 17, ISSUE 2 (APRIL 2010)

DESIGN ISSUES AND EXPERIMENTAL STUDIES OF WIRELESS LAN MESH

ABSTRACT
Wireless mesh networking, as a low-cost and reliable technology for rapid network deployment, has attracted considerable attention from academia and industry standardization bodies.
The IEEE 802.11s standard defines a wireless LAN mesh using the IEEE 802.11 medium access control and physical layers, and is one of the most active standards with increasing commercial opportunities. This study presents the design and development of a WLAN mesh system conforming to the latest IEEE 802.11s draft amendment. Without costly hardware modifications, the proposed solution is a pure software extension for commercial off-the-shelf WLAN chipsets.
This study constructs an experimental testbed, and evaluates issues such as the transmission reliability of mesh broadcast-type control messages and multichannel transmissions. Experimental results demonstrate that the delivery of mesh broadcast-type control messages, such as routing construction frames, using the multiple acknowledged unicast scheme improves mesh stability from an 86 to a 98 percent success ratio in a 16-node grid.
Transmitting packets using a single radio interface switching between multiple channels reduces inter-flow interference and doubles the throughput in our testbed.

======================================================

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
VOL. 59, NO. 2, FEBRUARY 2010

DISTANCE DISTRIBUTIONS IN FINITE UNIFORMLY RANDOM NETWORKS: THEORY AND APPLICATIONS

ABSTRACT
In wireless networks, knowledge of internode distances is essential for performance analysis and protocol design. When determining distance distributions in random networks, the underlying nodal arrangement is almost universally taken to be a stationary Poisson point process.
While this may be a good approximation in some cases, there are also certain shortcomings to this model, such as the fact that, in practical networks, the number of nodes in disjoint areas is not independent.
This paper considers a more-realistic network model where a known and fixed number of nodes are independently distributed in a given region and characterizes the distribution of the Euclidean internode distances.
The key finding is that, when the nodes are uniformly randomly placed inside a ball of arbitrary dimensions, the probability density function (pdf) of the internode distances follows a generalized beta distribution. This result is applied to study wireless network characteristics such as energy consumption, interference, outage, and connectivity.
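The closed-form result can be sanity-checked by simulation: for two nodes placed uniformly in the 2-D unit disk, the mean internode distance has the known closed form 128/(45π) ≈ 0.9054, and the Monte Carlo sketch below reproduces it (sample size and seed are arbitrary choices).

```java
import java.util.Random;

public class DiskDistances {
    // Draw a point uniformly from the unit disk; taking sqrt of the
    // radial uniform keeps the density uniform over the area.
    static double[] samplePoint(Random rng) {
        double r = Math.sqrt(rng.nextDouble());
        double theta = 2 * Math.PI * rng.nextDouble();
        return new double[] { r * Math.cos(theta), r * Math.sin(theta) };
    }

    // Monte Carlo estimate of the mean internode distance for two nodes
    // placed independently and uniformly in the unit disk.
    static double meanInternodeDistance(int samples, long seed) {
        Random rng = new Random(seed);
        double sum = 0.0;
        for (int i = 0; i < samples; i++) {
            double[] a = samplePoint(rng), b = samplePoint(rng);
            sum += Math.hypot(a[0] - b[0], a[1] - b[1]);
        }
        return sum / samples;
    }

    public static void main(String[] args) {
        // Analytic mean for the unit disk: 128 / (45 * pi) ~ 0.9054.
        System.out.println(meanInternodeDistance(200_000, 42L));
    }
}
```

This only cross-checks the first moment in the 2-D case; the paper's generalized beta pdf characterizes the full distribution in balls of arbitrary dimension.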

INDEX TERMS
Binomial point process, interference, internode distances, outage, Poisson point process, wireless networks.

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: APRIL 2010

DISTRIBUTED ALGORITHMS FOR MINIMUM COST MULTICAST WITH NETWORK CODING

ABSTRACT
Network coding techniques are used to find the minimum-cost transmission scheme for multicast sessions with or without elastic rate demand. It is shown that in wireline networks, solving for the optimal coding subgraphs in network coding is equivalent to finding the optimal routing scheme in a multicommodity flow problem.
A set of node-based distributed gradient projection algorithms are designed to jointly implement congestion control/routing at the source node and “virtual” routing at intermediate nodes. The analytical framework and distributed algorithms are further extended to interference-limited wireless networks where link capacities are functions of the signal-to-interference-plus-noise ratio (SINR).
To achieve minimum-cost multicast in this setting, the transmission powers of links must be jointly optimized with coding subgraphs and multicast input rates. Node-based power allocation and power control algorithms are developed for the power optimization.
The power algorithms, when iterated in conjunction with the congestion control and routing algorithms, converge to the jointly optimal multicast configuration. The scaling matrices required in the gradient projection algorithms are explicitly derived and are shown to guarantee fast convergence to the optimum from any initial condition.

======================================================
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT
VOL. 7, NO. 3, SEPTEMBER 2010

DISTRIBUTED AUTO-CONFIGURATION OF NEIGHBORING CELL GRAPHS IN RADIO ACCESS NETWORKS

ABSTRACT
In order to execute handover processes in a GSM or UMTS Radio Access Network, each cell has a list of neighbors to which such handovers may be made. Today, these lists are statically configured during network planning, which does not allow for dynamic adaptation of the network to changes and unexpected events such as a cell failure.
This paper advocates an autonomic, decentralized approach to dynamically configure neighboring cell lists. The main contribution of this work is a novel protocol, called DOC, which detects and continuously tracks the coverage overlaps among cells. The protocol executes on a spanning tree where the nodes are radio base stations and the links represent communication channels. Over this tree, nodes periodically exchange information about terminals that are in their respective coverage area.
Bloom filters are used for efficient representations of terminal sets and efficient set operations. The protocol aggregates Bloom filters to reduce the communication overhead and also for routing messages along the tree. Using simulation, we study the system in steady state, when a base station is added or a base station fails, and also during the initialization phase where the system self-configures.
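The Bloom-filter aggregation step can be sketched as below: each base station inserts the IDs of terminals it covers, and filters are merged along the spanning tree with a bitwise OR. The filter size, hash count, and double-hashing scheme are illustrative choices, not the ones used in DOC.

```java
import java.util.BitSet;

public class TerminalSetFilter {
    private final BitSet bits;
    private final int m, k; // bit-array size and number of hash positions

    TerminalSetFilter(int m, int k) {
        this.m = m;
        this.k = k;
        this.bits = new BitSet(m);
    }

    // k bit positions derived from the ID's hashCode (simple double
    // hashing for illustration; deployments would use stronger hashes).
    private int position(String id, int i) {
        int h = id.hashCode();
        return Math.floorMod(h + i * ((h >>> 16) | 1), m);
    }

    void add(String terminalId) {
        for (int i = 0; i < k; i++) bits.set(position(terminalId, i));
    }

    // May return false positives, never false negatives.
    boolean mightContain(String terminalId) {
        for (int i = 0; i < k; i++)
            if (!bits.get(position(terminalId, i))) return false;
        return true;
    }

    // Aggregation along the spanning tree: the union of two terminal
    // sets is the bitwise OR of filters built with the same m and k.
    void union(TerminalSetFilter other) { bits.or(other.bits); }

    public static void main(String[] args) {
        TerminalSetFilter a = new TerminalSetFilter(256, 3);
        a.add("terminal-1");
        TerminalSetFilter b = new TerminalSetFilter(256, 3);
        b.add("terminal-2");
        a.union(b);
        System.out.println(a.mightContain("terminal-1") && a.mightContain("terminal-2")); // true
    }
}
```

Because union is a fixed-size OR, the communication cost per tree link stays constant regardless of how many terminals a subtree covers, which is the overhead reduction the protocol relies on.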

INDEX TERMS
Self-management, distributed management, auto-configuration, radio access networks, UMTS

======================================================

DYNAMIC AUCTION MECHANISM FOR CLOUD RESOURCE ALLOCATION

ABSTRACT
We propose a dynamic auction mechanism to solve the allocation problem of computation capacity in the cloud computing environment. The truth-telling property holds when we apply a second-price auction mechanism to the resource allocation problem.
Thus, the cloud service provider (CSP) can assure reasonable profit and efficient allocation of its computation resources. In cases where the numbers of users and resources are large enough, potential problems of the second-price auction mechanism, including the variation of revenue, are not a serious concern since the law of large numbers holds.
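The truth-telling property comes from the second-price (Vickrey) rule: the highest bidder wins but pays the second-highest bid, so no user gains by misreporting its valuation. A minimal single-item sketch (at least two bidders assumed; how the CSP partitions capacity into auctioned units is not taken from the paper):

```java
public class SecondPriceAuction {
    static final class Result {
        final int winner;   // index of the winning bidder
        final double price; // payment: the second-highest bid
        Result(int winner, double price) { this.winner = winner; this.price = price; }
    }

    // Single-item second-price auction over at least two bids.
    static Result runAuction(double[] bids) {
        int best = 0, second = -1;
        for (int i = 1; i < bids.length; i++) {
            if (bids[i] > bids[best]) { second = best; best = i; }
            else if (second == -1 || bids[i] > bids[second]) second = i;
        }
        return new Result(best, bids[second]);
    }

    public static void main(String[] args) {
        Result r = runAuction(new double[] { 3.0, 7.0, 5.0 });
        System.out.println("winner=" + r.winner + " pays=" + r.price); // winner=1 pays=5.0
    }
}
```

Since the winner's payment does not depend on its own bid, bidding one's true valuation is a dominant strategy, which is what lets the CSP trust reported demand when allocating capacity.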

======================================================

IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
JANUARY 2010

DYNAMIC SPECTRUM LEASING (DSL): A NEW PARADIGM FOR SPECTRUM SHARING IN COGNITIVE RADIO NETWORKS

ABSTRACT
Recently [1] proposed a dynamic spectrum leasing (DSL) paradigm for dynamic spectrum access in cognitive radio networks. In this paper, we formalize this concept by developing a more general game theoretic framework for the dynamic spectrum leasing and by carefully identifying requirements for the coexistence of primary and secondary systems via spectrum leasing.
In contrast to hierarchical spectrum access, spectrum owners in the proposed dynamic spectrum leasing networks, denoted as primary users, dynamically adjust the amount of secondary interference they are willing to tolerate in response to the demand from secondary transmitters.
The secondary transmitters in turn attempt to achieve maximum possible throughput, or other suitably defined reward, opportunistically while not violating the interference limit set by the primary users. The new game-theoretic model, however, allows the secondary users to encourage the spectrum owners to push the interference cap upward based on demand. We have proposed a general structure for utility functions of primary users and the secondary users that allows the primary users to control the price and demand for spectrum access based on their required Quality-of-Service (QoS).
We establish that with these utility functions the DSL game has a unique Nash equilibrium to which the best-response adaptations converge. Moreover, it is shown that the proposed coexistence and best-response adaptations can be achieved without any significant interaction between the two systems.
In fact, it is shown that the only requirement is that the primary system periodically broadcasts two parameter values. We use several examples to illustrate the system behavior at the equilibrium, and use the performance at the equilibrium to identify suitable system design parameters.

INDEX TERMS
Cognitive radios, DSL, dynamic spectrum access, dynamic spectrum leasing, dynamic spectrum sharing, game theory, power control.

======================================================
IEEE TRANSACTIONS ON COMPUTERS
VOL. 59, NO. XX, XXXXXXX 2010

EFFICIENT MICROARCHITECTURAL VULNERABILITIES PREDICTION USING BOOSTED REGRESSION TREES AND PATIENT RULE INDUCTIONS

ABSTRACT
The shrinking processor feature size, lower threshold voltage, and increasing clock frequency make modern processors highly vulnerable to transient faults. Architectural Vulnerability Factor (AVF) reflects the possibility that a transient fault eventually causes a visible error in the program output, and it indicates a system’s susceptibility to transient faults. Therefore, the awareness of the AVF, especially at early design stage, is greatly helpful to achieve a trade-off between system performance and reliability.
However, tracking the AVF during program execution is extremely costly, which makes accurate AVF prediction extraordinarily attractive to computer architects. In this paper, we propose to use Boosted Regression Trees (BRT), a nonparametric tree-based predictive modeling scheme, to identify the correlation across workloads, execution phases, and processor configurations between a key processor structure’s AVF and various performance metrics.
The proposed method not only makes an accurate prediction but also quantitatively illustrates individual performance variable’s importance to the AVF. A quantitative comparison between our model and conventional linear regression is performed in terms of model stability, showing that our model is more stable when the model size varies.
Moreover, to reduce the prediction complexity, we also utilize a technique named the Patient Rule Induction Method (PRIM) to extract some simple selection rules on important metrics. Applying these rules at runtime can quickly identify execution intervals with a relatively high AVF. A case study that enables PRIM-based ROB redundancy has been performed to demonstrate a possible application of the trained PRIM rules.

INDEX TERMS
Hardware reliability, modeling and prediction, modeling of computer architecture.

======================================================

ELASTIC SITE - USING CLOUDS TO ELASTICALLY EXTEND SITE RESOURCES

ABSTRACT
Infrastructure-as-a-Service (IaaS) cloud computing offers new possibilities to scientific communities. One of the most significant is the ability to elastically provision and relinquish new resources in response to changes in demand.
In our work, we develop a model of an “elastic site” that efficiently adapts services provided within a site, such as batch schedulers, storage archives, or Web services to take advantage of elastically provisioned resources.
We describe the system architecture along with the issues involved with elastic provisioning, such as security, privacy, and various logistical considerations. To avoid over- or under-provisioning the resources we propose three different policies to efficiently schedule resource deployment based on demand.
We have implemented a resource manager, built on the Nimbus toolkit to dynamically and securely extend existing physical clusters into the cloud. Our elastic site manager interfaces directly with local resource managers, such as Torque. We have developed and evaluated policies for resource provisioning on a Nimbus-based cloud at the University of Chicago, another at Indiana University, and Amazon EC2.
We demonstrate a dynamic and responsive elastic cluster, capable of responding effectively to a variety of job submission patterns. We also demonstrate that we can process jobs 10 times faster by expanding our cluster up to 150 EC2 nodes.

======================================================

MOBILE COMPUTING, IEEE TRANSACTIONS ON
ISSUE DATE: MARCH 2010, VOLUME: 9 ISSUE:3

ENABLING EFFICIENT PEER-TO-PEER RESOURCE SHARING IN WIRELESS MESH NETWORKS

ABSTRACT
Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks.
Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure and the 1-hop broadcast nature of wireless communication, which lead to the notions of location awareness and MAC-layer cross-layering.
Through extensive packet-level simulations, we investigate the separate effects of location awareness and MAC-layer cross-layering, and of their combination, on the performance of the P2P application. The combined protocol, MeshChord, reduces message overhead by as much as 40 percent with respect to the basic Chord design, while at the same time improving information retrieval performance.
Notably, differently from the basic Chord design, our proposed MeshChord specialization displays information retrieval performance resilient to the presence of both CBR and TCP background traffic.
Overall, the results of our study suggest that MeshChord can be successfully utilized for implementing file/resource sharing applications in wireless mesh networks.

======================================================

ENERGY EFFICIENT RESOURCE MANAGEMENT IN VIRTUALIZED CLOUD DATA CENTERS

ABSTRACT
Rapid growth of the demand for computational power by scientific, business and web-applications has led to the creation of large-scale data centers consuming enormous amounts of electrical power.
We propose an energy efficient resource management system for virtualized Cloud data centers that reduces operational costs and provides required Quality of Service (QoS).
Energy savings are achieved by continuous consolidation of VMs according to current utilization of resources, virtual network topologies established between VMs and thermal state of computing nodes.
We present first results of simulation-driven evaluation of heuristics for dynamic reallocation of VMs using live migration according to current requirements for CPU performance.
The results show that the proposed technique brings substantial energy savings, while ensuring reliable QoS. This justifies further investigation and development of the proposed resource management system.

======================================================

ENERGY-AWARE TAG ANTI-COLLISION PROTOCOLS FOR RFID SYSTEMS

ABSTRACT
Energy consumption of mobile readers is becoming an important issue as applications of RFID systems pervade different aspects of our lives. Surprisingly, however, these systems are not energy-aware, with the focus to date being on reducing the time for the reader to read all tags.
The problem of tag arbitration in RFID systems is considered with the aim of trading off time for energy savings at the reader. The approach of using multiple time slots per node of a binary search tree is explored through three anticollision protocols that aim to reduce the number of colliding responses from tags.
This results in fewer reader queries and tag responses and, hence, energy savings at both the reader and tags (if they are active tags). An analytical framework is developed to predict the performance of our protocols, with the numerical evaluation of this framework validated through simulation.
It is shown that all three protocols provide significant energy savings when compared to the existing Query Tree protocol while sharing the deterministic and memoryless properties of the latter.
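For reference, the baseline Query Tree protocol that the three proposed protocols are compared against can be sketched as follows: the reader queries binary ID prefixes and splits a prefix whenever more than one tag responds. Counting the queries gives the time (and hence energy) cost that the multiple-slots-per-node approach seeks to reduce; the tag IDs below are made up for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class QueryTree {
    // Baseline Query Tree arbitration over fixed-length binary tag IDs:
    // the reader queries a prefix, matching tags respond, and a collision
    // splits the prefix into "prefix0" and "prefix1". Returns the number
    // of reader queries; singulated tags are collected in `identified`.
    static int arbitrate(List<String> tagIds, List<String> identified) {
        Deque<String> pending = new ArrayDeque<>();
        pending.push("");
        int queries = 0;
        while (!pending.isEmpty()) {
            String prefix = pending.pop();
            queries++;
            List<String> responders = new ArrayList<>();
            for (String id : tagIds)
                if (id.startsWith(prefix)) responders.add(id);
            if (responders.size() == 1) {
                identified.add(responders.get(0)); // exactly one reply: tag read
            } else if (responders.size() > 1) {
                pending.push(prefix + "0");        // collision: split the prefix
                pending.push(prefix + "1");
            }
        }
        return queries;
    }

    public static void main(String[] args) {
        List<String> found = new ArrayList<>();
        int queries = arbitrate(List.of("00", "01", "11"), found);
        System.out.println(queries + " queries, identified " + found); // 5 queries
    }
}
```

The protocol is deterministic and memoryless on the tag side (tags only match prefixes), the two properties the abstract says the proposed energy-aware variants preserve while cutting the number of colliding responses.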

======================================================
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
VOL. 21, NO. 5, MAY 2010

ENERGY-EFFICIENT PROTOCOL FOR DETERMINISTIC AND PROBABILISTIC COVERAGE IN SENSOR NETWORKS

ABSTRACT
Various sensor types, e.g., temperature, humidity, and acoustic, sense physical phenomena in different ways, and thus, are expected to have different sensing models. Even for the same sensor type, the sensing model may need to be changed in different environments. Designing and testing a different coverage protocol for each sensing model is indeed a costly task.
To address this challenging task, we propose a new probabilistic coverage protocol (denoted by PCP) that could employ different sensing models. We show that PCP works with the common disk sensing model as well as probabilistic sensing models, with minimal changes.
We analyze the complexity of PCP and prove its correctness. In addition, we conduct an extensive simulation study of large-scale sensor networks to rigorously evaluate PCP and compare it against other deterministic and probabilistic protocols in the literature.
Our simulation demonstrates that PCP is robust, and it can function correctly in the presence of random node failures, inaccuracies in node locations, and imperfect time synchronization of nodes. Our comparisons with other protocols indicate that PCP outperforms them in several aspects, including the number of activated sensors, total energy consumed, and network lifetime.

INDEX TERMS
Sensor networks, coverage in sensor networks, probabilistic coverage, coverage protocols.

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS

PERTURBATION ANALYSIS FOR SPECTRUM SHARING IN COGNITIVE RADIO NETWORKS

ABSTRACT
A primary ad-hoc network working in parallel with a secondary ad-hoc network is considered. The main challenge in operating cognitive ad-hoc networks is the lack of a centralized controller performing resource allocation for different users in the network.
In this paper, a distributed power allocation scheme is considered for secondary users and its performance is analyzed when time average channel gains are substituted for instantaneous channel gains.
In this way, it is not necessary to exchange instantaneous channel information; however, users’ allocated power will be perturbed. It is of interest to analyze mathematically this perturbation and to show how it affects the network performance.
In particular, an upper bound on the perturbation of each user’s allocated power, rate, and the interference caused to primary receivers by the secondary users is obtained. Then, it is shown how this perturbation affects the transmission rate and the probability of interference constraint violation by the secondary users.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING

ENGINEERING WIRELESS MESH NETWORKS: JOINT SCHEDULING, ROUTING, POWER CONTROL, AND RATE ADAPTATION

ABSTRACT
We present a number of significant engineering insights on what makes a good configuration for medium- to large-size wireless mesh networks (WMNs) when the objective function is to maximize the minimum throughput among all flows.
For this, we first develop efficient and exact computational tools using column generation with greedy pricing that allow us to compute exact solutions for networks significantly larger than what has been possible so far.
We also develop very fast approximations that compute nearly optimal solutions for even larger cases. Finally, we adapt our tools to the case of proportional fairness and show that the engineering insights are very similar.

Index Terms
Column generation, power control, rate adaptation, routing, scheduling, wireless mesh networks (WMNs).

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 4, APRIL 2010

ENHANCING CELL-EDGE PERFORMANCE: A DOWNLINK DYNAMIC INTERFERENCE AVOIDANCE SCHEME WITH INTER-CELL COORDINATION

ABSTRACT
Interference management has been a key concept for designing future high data-rate wireless systems that are required to employ dense reuse of spectrum. Static or semistatic interference coordination based schemes provide enhanced cell-edge performance but with severe penalty to the overall cell throughput. Furthermore, static resource planning makes these schemes unsuitable for applications in which frequency planning is difficult, such as femtocell networks.
In this paper, we present a novel dynamic interference avoidance scheme that makes use of inter-cell coordination in order to prevent excessive inter-cell interference, especially for cell or sector edge users that are most affected by inter-cell interference, with minimal or no impact on the network throughput.
The proposed scheme comprises a two-level algorithm: one at the base station level and the other at a central controller to which a group of neighboring base stations is connected.
Simulation results show that the proposed scheme outperforms the reference schemes, in which either coordination is not employed (reuse of 1) or employed in a static manner (reuse of 3 and fractional frequency reuse), in terms of cell edge throughput with a minimal impact on the network throughput and with some increase in complexity.

INDEX TERMS
OFDMA resource allocation, interference avoidance, resource optimization, inter-cell coordination.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
ISSUE DATE: JUNE 2010

EQUILIBRIUM OF HETEROGENEOUS CONGESTION CONTROL: OPTIMALITY AND STABILITY

ABSTRACT
When heterogeneous congestion control protocols that react to different pricing signals share the same network, the current theory based on utility maximization fails to predict the network behavior. The pricing signals can be different types of signals, such as packet loss or queueing delay, or different values of the same type of signal, such as different ECN marking values based on the same actual link congestion level.
Unlike in a homogeneous network, the bandwidth allocation now depends on router parameters and flow arrival patterns. It can be non-unique, suboptimal and unstable. In Tang (“Equilibrium of heterogeneous congestion control: Existence and uniqueness,” IEEE/ACM Trans. Netw., vol. 15, no. 4, pp. 824–837, Aug. 2007), existence and uniqueness of equilibrium of heterogeneous protocols are investigated.
This paper extends the study with two objectives: analyzing the optimality and stability of such networks and designing control schemes to improve those properties.
First, we demonstrate the intricate behavior of a heterogeneous network through simulations and present a framework to help understand its equilibrium properties.
Second, we propose a simple source-based algorithm to decouple bandwidth allocation from router parameters and flow arrival patterns by only updating a linear parameter in the sources' algorithms on a slow timescale. It steers a network to the unique optimal equilibrium.
The scheme can be deployed incrementally, as the existing protocol needs no change and only new protocols need to adopt the slow-timescale adaptation.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 5, MAY 2010

EXPLOITING AND DEFENDING OPPORTUNISTIC SCHEDULING IN CELLULAR DATA NETWORKS

ABSTRACT
Third Generation (3G) cellular networks take advantage of time-varying and location-dependent channel conditions of mobile users to provide broadband services. Under fairness and QoS constraints, they use opportunistic scheduling to efficiently utilize the available spectrum.
Opportunistic scheduling algorithms rely on the collaboration among all mobile users to achieve their design objectives. However, we demonstrate that rogue cellular devices can exploit vulnerabilities in popular opportunistic scheduling algorithms, such as Proportional Fair (PF) and Temporal Fair (TF), to usurp the majority of time slots in 3G networks.
Our simulations show that under realistic conditions, only five rogue devices in a 50-user cell can capture up to 95 percent of the time slots, and can cause 2-second end-to-end inter-packet transmission delays for VoIP applications for every user in the same cell, rendering VoIP applications useless.
To defend against this attack, we propose strengthened versions of the PF and TF schedulers and a robust handoff scheme.
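The vulnerability is easy to reproduce in miniature. The sketch below is an illustration, not the paper's simulation setup: the rates, the EWMA time constant, and the two-user population are invented. It implements a textbook Proportional Fair scheduler, which serves the user maximizing reported rate divided by average throughput, plus a rogue user that always reports the peak rate. Because the rogue never presents a "bad" slot, it wins more than its fair share:

```java
import java.util.Random;

/** Minimal sketch of a Proportional Fair (PF) scheduler with one rogue
 *  user (index 0) that always reports the peak rate instead of its true
 *  channel rate. All parameters here are illustrative, not the paper's. */
public class PfSchedulerSketch {
    static final double TC = 1000.0;          // EWMA time constant (hypothetical)

    /** Run `slots` scheduling rounds; users[0] is the rogue. Returns slot counts won. */
    public static int[] simulate(int users, int slots, long seed) {
        Random rnd = new Random(seed);
        double[] avg = new double[users];
        java.util.Arrays.fill(avg, 1e-6);     // avoid division by zero at startup
        int[] won = new int[users];
        for (int t = 0; t < slots; t++) {
            int best = -1;
            double bestMetric = -1;
            double[] reported = new double[users];
            for (int i = 0; i < users; i++) {
                double trueRate = 0.1 + 0.9 * rnd.nextDouble();   // fading channel
                reported[i] = (i == 0) ? 1.0 : trueRate;          // rogue lies: always peak
                double metric = reported[i] / avg[i];             // the PF metric
                if (metric > bestMetric) { bestMetric = metric; best = i; }
            }
            won[best]++;
            // EWMA update: only the scheduled user adds its (reported) rate.
            for (int i = 0; i < users; i++)
                avg[i] = (1 - 1 / TC) * avg[i] + (i == best ? reported[i] / TC : 0);
        }
        return won;
    }

    public static void main(String[] args) {
        int[] won = simulate(2, 100000, 42);
        System.out.println("rogue slot share = " + (double) won[0] / 100000);
    }
}
```

In this two-user sketch the rogue reliably out-schedules the honest user; the attacks analyzed in the paper against real 3G schedulers are more elaborate.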

INDEX TERMS
Security, opportunistic scheduling, proportional fair, temporal fair, handoff.

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 5, MAY 2010

FAIRNESS-AWARE RADIO RESOURCE MANAGEMENT IN DOWNLINK OFDMA CELLULAR RELAY NETWORKS

ABSTRACT
Relaying and orthogonal frequency division multiple access (OFDMA) are the accepted technologies for emerging wireless communications standards. The activities in many wireless standardization bodies and forums, for example IEEE 802.16 j/m and LTE-Advanced, attest to this fact.
The availability, or lack thereof, of efficient radio resource management (RRM) could make or mar the opportunities in these networks. Although distributed schemes are more attractive, it is essential to establish outstanding performance benchmarks against which various decentralized schemes can be compared.
Therefore, this paper provides a comprehensive centralized RRM algorithm for downlink OFDMA cellular fixed relay networks in a way to ensure user fairness with minimal impact on network throughput. In contrast, it has been observed that pure opportunistic schemes and fairness-aware schemes relying solely on achievable and allocated capacities may not attain the desired fairness, e.g., proportional fair scheduling.
The proposed scheme is queue-aware and jointly performs three functions: dynamic routing, fair scheduling, and load balancing among cell nodes. We show that the proposed centralized scheme differs from traditional centralized schemes in its substantial savings in complexity and feedback overhead.

INDEX TERMS
RRM, OFDMA, relaying, routing, scheduling, fairness, load balancing, proportional fairness.

======================================================

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
VOL. 22, NO. 5, MAY 2010

FALSE NEGATIVE PROBLEM OF
COUNTING BLOOM FILTER

ABSTRACT
A Bloom filter is an effective, space-efficient data structure for concisely representing a data set and supporting approximate membership queries. Traditionally, researchers have often believed that a Bloom filter may return a false positive, but will never return a false negative under well-behaved operations.
By investigating the mainstream variants, however, we observe that a Bloom filter does return false negatives in many scenarios. In this work, we show that the undetectable incorrect deletion of false-positive items and the detectable incorrect deletion of multiaddress items are two general causes of false negatives in a Bloom filter.
We then measure the potential and exposed false negatives theoretically and practically. Inspired by the fact that the potential false negatives are usually not fully exposed, we propose a novel Bloom filter scheme, which increases the ratio of bits set to a value larger than one without decreasing the ratio of bits set to zero.
Mathematical analysis and comprehensive experiments show that this design can reduce the number of exposed false negatives as well as decrease the likelihood of false positives.
To the best of our knowledge, this is the first work dealing with both the false positive and false negative problems of Bloom filter systematically when supporting standard usages of item insertion, query, and deletion operations.
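The first cause above, deleting an item that was never inserted but happens to look present, can be reproduced in a few lines. This is a minimal counting Bloom filter sketch with toy hash functions chosen so the collision is easy to see; a real filter would use independent hashes over the item's bytes:

```java
/** Minimal counting Bloom filter sketch illustrating the false-negative
 *  mechanism: deleting a false-positive item decrements counters that
 *  legitimately inserted items depend on. The two hash functions are toy
 *  functions (low and high "digits" of the item), chosen for clarity. */
public class CountingBloomSketch {
    private final int[] counters;
    private final int m;

    public CountingBloomSketch(int m) { this.m = m; this.counters = new int[m]; }

    private int[] positions(int item) {       // toy hashes, not cryptographic
        return new int[] { item % m, (item / m) % m };
    }

    public void insert(int item) { for (int p : positions(item)) counters[p]++; }

    public boolean query(int item) {
        for (int p : positions(item)) if (counters[p] == 0) return false;
        return true;
    }

    /** Deletion trusts the (possibly wrong) membership answer: the root cause. */
    public void delete(int item) {
        for (int p : positions(item)) if (counters[p] > 0) counters[p]--;
    }

    public static void main(String[] args) {
        CountingBloomSketch f = new CountingBloomSketch(8);
        int x = 8 * 1 + 2, y = 8 * 3 + 4;     // positions {2,1} and {4,3}
        int z = 8 * 1 + 4;                    // positions {4,1}: never inserted
        f.insert(x); f.insert(y);
        System.out.println(f.query(z));       // true: z is a false positive
        f.delete(z);                          // incorrect deletion of the false positive...
        System.out.println(f.query(x));       // false: x has become a false negative
    }
}
```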

INDEX TERMS
Bloom filter, false negative, multichoice counting Bloom filter.

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. XX, NO. XX, MONTH 2010

FAST ALGORITHMS FOR JOINT POWER CONTROL AND SCHEDULING IN WIRELESS NETWORKS

ABSTRACT
This paper studies the problem of finding a minimum-length schedule for a power-controlled wireless network subject to traffic demands and SINR (signal-to-interference-plus-noise ratio) constraints. We propose a column generation based algorithm that finds the optimal schedules and transmit powers.
The column generation method decomposes a complex linear optimization problem into a restricted master problem and a pricing problem. We develop a new formulation of the pricing problem using the Perron-Frobenius eigenvalue condition, which enables us to integrate link scheduling with power control in a single framework.
This new formulation reduces the complexity of the pricing problem, and thus improves the overall efficiency of the column generation method significantly – for example, the average runtime is reduced by 99.86% in 18-link networks compared with the traditional column generation method. Furthermore, we propose a branch-and-price method that combines column generation with the branch-and-bound technique to tackle the integer constraints on time slot allocation.
We develop a new branching rule in the branch-and-price method that maintains the size of the pricing problem after each branching. Our branch-and-price method can obtain optimal integer solutions efficiently – for example, the average runtime is reduced by 99.72% in 18-link networks compared with the traditional branch-and-price method.
We further suggest efficient heuristic algorithms based on the structure of the optimal algorithms. Simulation results show that the heuristic algorithms can reach solutions within 10% of optimality for networks with less than 30 links.
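The Perron-Frobenius condition underlying the new pricing formulation can be stated compactly: SINR targets gamma_i on a set of simultaneously active links are achievable with finite powers if and only if the spectral radius of the normalized gain matrix B, with B[i][j] = gamma_i * G[i][j] / G[i][i] for i != j and zero diagonal, is below one. Below is a sketch of that feasibility test; the gain values in the usage example are invented, and the paper embeds this condition inside the pricing problem rather than using it stand-alone:

```java
/** Sketch of the Perron-Frobenius feasibility test that couples power
 *  control with scheduling: SINR targets gamma[i] on links with gains
 *  G[i][j] are jointly achievable with finite powers iff the spectral
 *  radius of B, B[i][j] = gamma[i]*G[i][j]/G[i][i] (i != j), is < 1. */
public class SinrFeasibility {
    /** Spectral radius of a nonnegative matrix via power iteration. */
    public static double spectralRadius(double[][] b, int iters) {
        int n = b.length;
        double[] v = new double[n];
        java.util.Arrays.fill(v, 1.0);
        double lambda = 0;
        for (int it = 0; it < iters; it++) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) w[i] += b[i][j] * v[j];
            lambda = 0;
            for (double x : w) lambda = Math.max(lambda, x);
            if (lambda == 0) return 0;        // zero matrix: radius is 0
            for (int i = 0; i < n; i++) v[i] = w[i] / lambda;   // normalize
        }
        return lambda;
    }

    public static boolean feasible(double[][] g, double[] gamma) {
        int n = g.length;
        double[][] b = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) b[i][j] = gamma[i] * g[i][j] / g[i][i];
        return spectralRadius(b, 200) < 1.0;
    }
}
```

For example, two weakly coupled links (cross-gain 0.1) can both meet an SINR target of 1, but not a target of 20.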

INDEX TERMS
Scheduling, power control, SINR constraints, column generation, branch-and-price.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING

FAST ALGORITHMS FOR RESOURCE ALLOCATION IN WIRELESS CELLULAR NETWORKS

ABSTRACT
We consider a scheduled orthogonal frequency division multiplexed (OFDM) wireless cellular network where the channels from the base station to the n mobile users undergo flat fading. Spectral resources are to be divided among the users in order to maximize total user utility.
We show that this problem can be cast as a nonlinear convex optimization problem, and describe an O(n) algorithm to solve it. Computational experiments show that the algorithm typically converges in around 25 iterations, where each iteration has a cost that is O(n), with a modest constant.
When the algorithm starts from an initial resource allocation that is close to optimal, convergence typically takes even fewer iterations. Thus, the algorithm can efficiently track the optimal resource allocation as the channel conditions change due to fading.
We also show how our techniques can be extended to solve resource allocation problems that arise in wideband networks with frequency selective fading and when the utility of a user is also a function of the resource allocations in the past.

======================================================
IEEE TRANSACTIONS ON PATTERN ANALYSIS
AND MACHINE INTELLIGENCE
ISSUE DATE: JAN. 2010

FASTER AND BETTER: A MACHINE
LEARNING APPROACH TO CORNER DETECTION

ABSTRACT
The repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations.
The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, we derive a feature detector from this which can fully process live PAL video using less than 5 percent of the available processing time.
By comparison, most other detectors cannot even operate at frame rate (Harris detector 115 percent, SIFT 195 percent). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency.
Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that, despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors.
Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and of very high quality.
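For concreteness, the heuristic from which this detector is learned is the segment test: a pixel is a corner candidate if at least N contiguous pixels on the 16-pixel Bresenham circle around it are all brighter than the center by a threshold t, or all darker. The sketch below evaluates the test directly on an integer image; the paper's contribution is learning a decision tree that reaches the same answer in far fewer comparisons. The image, threshold, and arc length in the usage example are invented:

```java
/** Sketch of the segment-test corner heuristic: a pixel is a corner
 *  candidate if at least n contiguous pixels on the 16-pixel circle of
 *  radius 3 around it are all brighter than center+t or all darker than
 *  center-t. This direct evaluation is what the learned tree speeds up. */
public class SegmentTestSketch {
    // Offsets of the 16-pixel Bresenham circle of radius 3, in circular order.
    static final int[][] CIRCLE = {
        {0,-3},{1,-3},{2,-2},{3,-1},{3,0},{3,1},{2,2},{1,3},
        {0,3},{-1,3},{-2,2},{-3,1},{-3,0},{-3,-1},{-2,-2},{-1,-3}};

    public static boolean isCorner(int[][] img, int x, int y, int t, int n) {
        int c = img[y][x];
        int[] state = new int[16];            // +1 brighter, -1 darker, 0 similar
        for (int i = 0; i < 16; i++) {
            int v = img[y + CIRCLE[i][1]][x + CIRCLE[i][0]];
            state[i] = v > c + t ? 1 : (v < c - t ? -1 : 0);
        }
        for (int s : new int[]{1, -1}) {      // check both bright and dark arcs
            int run = 0, best = 0;
            for (int i = 0; i < 32; i++) {    // scan twice to handle wrap-around
                run = (state[i % 16] == s) ? run + 1 : 0;
                best = Math.max(best, run);
            }
            if (best >= n) return true;
        }
        return false;
    }
}
```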

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 5, MAY 2010

FAULT-TOLERANT RELAY NODE PLACEMENT IN HETEROGENEOUS WIRELESS SENSOR NETWORKS

ABSTRACT
Existing work on placing additional relay nodes in wireless sensor networks to improve network connectivity typically assumes homogeneous wireless sensor nodes with an identical transmission radius.
In contrast, this paper addresses the problem of deploying relay nodes to provide fault tolerance with higher network connectivity in heterogeneous wireless sensor networks, where sensor nodes possess different transmission radii.
Depending on the level of desired fault tolerance, such problems can be categorized as:
1) full fault-tolerant relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths between every pair of sensor and/or relay nodes, and
2) partial fault-tolerant relay node placement, which aims to deploy a minimum number of relay nodes to establish k (k ≥ 1) vertex-disjoint paths only between every pair of sensor nodes.
Due to the different transmission radii of sensor nodes, these problems are further complicated by the existence of two different kinds of communication paths in heterogeneous wireless sensor networks, namely, two-way paths, along which wireless communications exist in both directions; and one-way paths, along which wireless communications exist in only one direction.
Assuming that sensor nodes have different transmission radii, while relay nodes use the same transmission radius, this paper comprehensively analyzes the range of problems introduced by the different levels of fault tolerance (full or partial) coupled with the different types of path (one-way or two-way).
Since each of these problems is NP-hard, we develop O(σk²)-approximation algorithms for both one-way and two-way partial fault-tolerant relay node placement, as well as O(σk³)-approximation algorithms for both one-way and two-way full fault-tolerant relay node placement (σ is the best performance ratio of existing approximation algorithms for finding a minimum k-vertex connected spanning graph).
To facilitate applications in higher dimensions, we also extend these algorithms and derive their performance ratios in d-dimensional heterogeneous wireless sensor networks (d ≥ 3). Finally, heuristic implementations of these algorithms are evaluated via QualNet simulations.

INDEX TERMS
Heterogeneous wireless sensor networks, relay node placement, approximation algorithms.

======================================================

FUZZY KEYWORD SEARCH OVER ENCRYPTED DATA IN CLOUD COMPUTING

ABSTRACT
As Cloud Computing becomes prevalent, more and more sensitive information is being centralized into the cloud. For the protection of data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search.
That is, there is no tolerance of minor typos and format inconsistencies which, on the other hand, are typical user searching behavior and happen very frequently. This significant drawback makes existing techniques unsuitable in Cloud Computing as it greatly affects system usability, rendering user searching experiences very frustrating and system efficacy very low.
In this paper, for the first time, we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword search greatly enhances system usability by returning exactly matching files when users’ search inputs match the predefined keywords, and otherwise the closest possible matching files based on keyword similarity semantics.
In our solution, we exploit edit distance to quantify keywords similarity and develop an advanced technique on constructing fuzzy keyword sets, which greatly reduces the storage and representation overheads. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search.
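The wildcard idea that keeps the fuzzy keyword sets small can be sketched as follows. For edit distance 1, each keyword is expanded into variants with a single '*' standing for any one substitution, deletion, or insertion, so "cat" yields only 8 strings instead of one string per concrete misspelling, and two words fuzzy-match when their wildcard sets intersect. This is a simplified reconstruction of the construction; the actual scheme stores trapdoors of these strings, not plaintext, to preserve keyword privacy:

```java
import java.util.HashSet;
import java.util.Set;

/** Sketch of a wildcard-based fuzzy keyword set for edit distance 1.
 *  "cat" expands to {cat, *at, c*t, ca*, *cat, c*at, ca*t, cat*}:
 *  one string per wildcard position instead of one per misspelling. */
public class FuzzyKeywordSet {
    /** Wildcard set covering all single-character edits of w. */
    public static Set<String> wildcardSet(String w) {
        Set<String> s = new HashSet<>();
        s.add(w);                                      // exact form
        for (int i = 0; i < w.length(); i++)           // substitution/deletion at i
            s.add(w.substring(0, i) + "*" + w.substring(i + 1));
        for (int i = 0; i <= w.length(); i++)          // insertion before position i
            s.add(w.substring(0, i) + "*" + w.substring(i));
        return s;
    }

    /** Two words fuzzy-match if their wildcard sets intersect. */
    public static boolean fuzzyMatch(String query, String keyword) {
        Set<String> a = wildcardSet(query);
        for (String t : wildcardSet(keyword)) if (a.contains(t)) return true;
        return false;
    }
}
```

For example, the typo "cst" matches "cat" through the shared wildcard string "c*t", while "dog" shares nothing with "cat".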

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 2, FEBRUARY 2010

GENERALIZED MIMO TRANSMIT PREPROCESSING USING PILOT SYMBOL ASSISTED RATELESS CODES

ABSTRACT
In this paper, we propose a generalized multiple-input multiple-output (MIMO) transmit preprocessing system, where both the channel coding and the linear MIMO transmit precoding components exploit the knowledge of the channel.
This was achieved by exploiting the inherently flexible nature of a specific family of rateless codes that are capable of modifying their code-rate as well as their degree distribution based on the channel state information (CSI), in an attempt to adapt to the time-varying nature of the channel. Moreover, we also propose a novel technique, hereby referred to as pilot symbol assisted rateless (PSAR) coding, where a predetermined fraction of binary pilot symbols is interspersed with the channel-coded bits at the channel coding stage, instead of multiplexing the pilots with the data symbols at the modulation stage, as in classic pilot symbol assisted modulation (PSAM).
We will subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more beneficial knowledge from the inserted pilots, because the pilot bits are not only useful for estimating the channel at the receiver, but they are also beneficial in terms of significantly reducing the computational complexity of the rateless channel decoder.
Our results suggest that more than a 30% reduction in the decoder’s computational complexity can be attained by the proposed system, when compared to a corresponding benchmarker scheme having the same pilot overhead but using the PSAM technique.

INDEX TERMS
Generalized transmit preprocessing, MIMO, pilot symbol assisted rateless codes, rateless codes, complexity reduction, pilot symbol assisted modulation.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING

HIGH RATE UNCORRELATED BIT EXTRACTION
FOR SHARED SECRET KEY GENERATION
FROM CHANNEL MEASUREMENTS

ABSTRACT
Secret keys can be generated and shared between two wireless nodes by measuring and encoding radio channel characteristics without ever revealing the secret key to an eavesdropper at a third location.
This paper addresses bit extraction, i.e., the extraction of secret key bits from noisy radio channel measurements at two nodes such that the two secret keys reliably agree.
Problems include
(1) non-simultaneous directional measurements,
(2) correlated bit streams, and
(3) low bit rate of secret key generation.
This paper introduces high rate uncorrelated bit extraction (HRUBE), a framework for interpolating, transforming for de-correlation, and encoding channel measurements using a multi-bit adaptive quantization scheme which allows multiple bits per component.
We present an analysis of the probability of bit disagreement in generated secret keys, and we use experimental data to demonstrate the HRUBE scheme and to quantify its experimental performance. As two examples, the implemented HRUBE system can achieve 22 bits per second at a bit disagreement rate of 2.2%, or 10 bits per second at a bit disagreement rate of 0.54%.
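The quantization-and-encoding step can be sketched as below. Interpolation and the de-correlating transform, the other HRUBE stages, are omitted; the equiprobable thresholds and Gray coding are standard constructions consistent with the description above, with made-up parameters. Gray coding matters because adjacent quantization bins differ in exactly one bit, so small measurement noise between the two nodes flips at most one key bit:

```java
/** Sketch of a multi-bit adaptive quantizer for channel measurements
 *  (only the quantize-and-encode stage of an HRUBE-like pipeline).
 *  Thresholds split the empirical distribution into equally likely bins,
 *  and the bin index is Gray-coded so neighboring bins differ in 1 bit. */
public class GrayQuantizerSketch {
    /** Equiprobable bin thresholds derived from training samples. */
    public static double[] thresholds(double[] samples, int bitsPerSample) {
        double[] sorted = samples.clone();
        java.util.Arrays.sort(sorted);
        int bins = 1 << bitsPerSample;
        double[] th = new double[bins - 1];
        for (int i = 1; i < bins; i++)
            th[i - 1] = sorted[i * sorted.length / bins];   // empirical quantiles
        return th;
    }

    /** Gray-coded bin index of one measurement. */
    public static int quantize(double x, double[] th) {
        int bin = 0;
        while (bin < th.length && x >= th[bin]) bin++;
        return bin ^ (bin >>> 1);              // binary-reflected Gray code
    }
}
```

With 2 bits per sample the four bins map to codes 00, 01, 11, 10, so a measurement that drifts into the next bin disagrees with the peer in only one bit position.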

INDEX TERMS
Wireless Networks, Multipath Fading, Physical Layer, Cryptography, Key Generation

======================================================
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
VOL. 21, NO. 1, JANUARY 2010

IRM: INTEGRATED FILE REPLICATION AND CONSISTENCY MAINTENANCE IN P2P SYSTEMS

ABSTRACT
In peer-to-peer file sharing systems, file replication and consistency maintenance are widely used techniques for high system performance. Despite significant interdependencies between them, these two issues are typically addressed separately.
Most file replication methods rigidly specify replica nodes, leading to low replica utilization, unnecessary replicas and hence extra consistency maintenance overhead. Most consistency maintenance methods propagate update messages based on message spreading or a structure without considering file replication dynamism, leading to inefficient file update and hence high possibility of outdated file response.
This paper presents an Integrated file Replication and consistency Maintenance mechanism (IRM) that integrates the two techniques in a systematic and harmonized manner. It achieves high efficiency in file replication and consistency maintenance at a significantly lower cost. Instead of passively accepting replicas and updates, each node determines file replication and update polling by dynamically adapting to time-varying file query and update rates, which avoids unnecessary file replications and updates.
Simulation results demonstrate the effectiveness of IRM in comparison with other approaches. It dramatically reduces overhead and yields significant improvements on the efficiency of both file replication and consistency maintenance approaches.

INDEX TERMS
File replication, consistency maintenance, peer-to-peer, distributed hash table.

======================================================
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
VOL. 22, NO. 5, MAY 2010

ISO-MAP: ENERGY-EFFICIENT CONTOUR MAPPING IN WIRELESS SENSOR NETWORKS

ABSTRACT
Contour mapping is a crucial part of many wireless sensor network applications. Collecting data from all the sensors in the network and producing maps at the sink has proven to be inefficient, and many efforts have been made to avoid it.
The existing approaches (often aggregation based), however, suffer from heavy transmission traffic and incur large computational overheads on each sensor node. We propose Iso-Map, an energy-efficient protocol for contour mapping, which builds contour maps based solely on the reports collected from intelligently selected “isoline nodes” in wireless sensor networks.
Iso-Map achieves high-quality contour mapping while significantly reducing the generated traffic from O(n) to O(p), where n is the total number of sensor nodes in the field and p is the number of selected isoline nodes.
The per-node computation overhead is also restrained to a constant. We conduct comprehensive trace-driven simulations to verify this protocol, and demonstrate that Iso-Map outperforms previous approaches in that it produces contour maps of high fidelity at significantly reduced energy cost.
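The node-selection idea can be illustrated with a small sketch, reconstructed from the abstract rather than the paper's exact rule: readings are partitioned into contour bands, and a node is an "isoline node" (and therefore reports) only if some neighbor's reading falls in a different band, meaning a contour line passes between them:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of isoline-node selection for contour mapping: only nodes
 *  adjacent to a contour-band boundary report. Band width and the
 *  neighbor structure are supplied by the caller (invented in tests). */
public class IsolineSelectSketch {
    static int band(double reading, double bandWidth) {
        return (int) Math.floor(reading / bandWidth);   // contour band index
    }

    /** Indices of nodes that lie on some isoline, given a neighbor list. */
    public static List<Integer> isolineNodes(double[] readings, int[][] neighbors,
                                             double bandWidth) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < readings.length; i++) {
            for (int j : neighbors[i]) {
                if (band(readings[i], bandWidth) != band(readings[j], bandWidth)) {
                    out.add(i);
                    break;                    // one differing neighbor is enough
                }
            }
        }
        return out;
    }
}
```

On a chain of four nodes whose readings jump between the second and third node, only those two middle nodes report, which is the traffic reduction the abstract describes.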

INDEX TERMS
Distributed applications, query processing, terrain mapping, wireless sensor networks.

======================================================
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING

K-ANONYMITY IN THE PRESENCE
OF EXTERNAL DATABASES

ABSTRACT
The concept of k-anonymity has received considerable attention due to the need of several organizations to release microdata without revealing the identity of individuals. Although all previous k-anonymity techniques assume the existence of a public database (PD) that can be used to breach privacy, none utilizes PD during the anonymization process.
Specifically, existing generalization algorithms create anonymous tables using only the microdata table (MT) to be published, independently of the external knowledge available.
This omission leads to high information loss. Motivated by this observation we first introduce the concept of k-join-anonymity (KJA), which permits more effective generalization to reduce the information loss.
Briefly, KJA anonymizes a superset of MT, which includes selected records from PD. We propose two methodologies for adapting k-anonymity algorithms to their KJA counterparts.
The first generalizes the combination of MT and PD, under the constraint that each group should contain at least one tuple of MT (otherwise, the group is useless and discarded).
The second anonymizes MT, and then refines the resulting groups using PD. Finally, we evaluate the effectiveness of our contributions with an extensive experimental evaluation using real and synthetic datasets.

INDEX TERMS
Privacy, k-anonymity.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING

LABEL-BASED DV-HOP LOCALIZATION AGAINST WORMHOLE ATTACKS IN WIRELESS SENSOR NETWORKS

ABSTRACT
Node localization has become an important issue in wireless sensor networks due to their broad applications in environment monitoring, emergency rescue, battlefield surveillance, etc. Basically, the DV-Hop localization mechanism can work well with the assistance of beacon nodes that have the capability of self-positioning.
However, if the network is invaded by a wormhole attack, the attacker can tunnel packets via the wormhole link to cause severe impacts on the DV-Hop localization process. The distance-vector propagation phase of DV-Hop localization even aggravates the positioning error, compared to localization without wormhole attacks.
In this paper, we analyze the impacts of wormhole attack on DV-Hop localization scheme. Based on the basic DV-Hop localization process, we propose a label-based secure localization scheme to defend against the wormhole attack.
Simulation results demonstrate that our proposed secure localization scheme is capable of detecting the wormhole attack and resisting its adverse impacts with a high probability.
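For reference, the basic DV-Hop computation that the attack perturbs looks like this (a sketch of the standard mechanism, without the label-based defense; the coordinates and hop counts in the usage example are invented, whereas a real network obtains them from the flooding phase): each beacon converts known inter-beacon distances and flooded hop counts into an average per-hop distance, unknown nodes scale their own hop counts by it, and the resulting distance estimates are trilaterated.

```java
/** Sketch of basic DV-Hop localization: average hop size from beacons,
 *  then trilateration of an unknown node from hop-derived distances.
 *  A wormhole shortens hop counts, which corrupts both steps. */
public class DvHopSketch {
    /** Average hop size seen by beacon b: sum of distances to the other
     *  beacons divided by the sum of hop counts to them. */
    public static double hopSize(double[][] beacons, int b, int[] hopsFromB) {
        double dist = 0;
        int hops = 0;
        for (int j = 0; j < beacons.length; j++) {
            if (j == b) continue;
            dist += Math.hypot(beacons[b][0] - beacons[j][0],
                               beacons[b][1] - beacons[j][1]);
            hops += hopsFromB[j];
        }
        return dist / hops;
    }

    /** Trilateration from exactly 3 beacons b with distance estimates d:
     *  subtracting circle equations linearizes the system (Cramer's rule). */
    public static double[] locate(double[][] b, double[] d) {
        double a11 = 2 * (b[0][0] - b[2][0]), a12 = 2 * (b[0][1] - b[2][1]);
        double a21 = 2 * (b[1][0] - b[2][0]), a22 = 2 * (b[1][1] - b[2][1]);
        double c1 = sq(b[0][0]) - sq(b[2][0]) + sq(b[0][1]) - sq(b[2][1])
                  + sq(d[2]) - sq(d[0]);
        double c2 = sq(b[1][0]) - sq(b[2][0]) + sq(b[1][1]) - sq(b[2][1])
                  + sq(d[2]) - sq(d[1]);
        double det = a11 * a22 - a12 * a21;
        return new double[] { (c1 * a22 - a12 * c2) / det,
                              (a11 * c2 - c1 * a21) / det };
    }

    private static double sq(double x) { return x * x; }
}
```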

Keywords:
DV-Hop localization; wireless sensor networks; wormhole attack.

======================================================

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
VOL. 7, NO. 1, JANUARY-MARCH 2010

LAYERED APPROACH USING CONDITIONAL RANDOM FIELDS FOR INTRUSION DETECTION

ABSTRACT
Intrusion detection faces a number of challenges; an intrusion detection system must reliably detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic.
In this paper, we address these two issues of accuracy and efficiency using Conditional Random Fields and a Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach.
Experimental results on the benchmark KDD ’99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as the decision trees and the naive Bayes.
The improvement in attack detection accuracy is very high, particularly for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and able to handle noisy data without compromising performance.

INDEX TERMS
Intrusion detection, Layered Approach, Conditional Random Fields, network security, decision trees, naive Bayes.

======================================================

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
VOL. 22, NO. 1, JANUARY 2010

LIGHT: A QUERY-EFFICIENT YET LOW-MAINTENANCE INDEXING SCHEME OVER DHTS

ABSTRACT
DHT is a widely used building block for scalable P2P systems. However, as uniform hashing employed in DHTs destroys data locality, it is not a trivial task to support complex queries (e.g., range queries and k-nearest-neighbor queries) in DHT-based P2P systems. In order to support efficient processing of such complex queries, a popular solution is to build indexes on top of the DHT.
Unfortunately, existing over-DHT indexing schemes suffer from either query inefficiency or high maintenance cost. In this paper, we propose LIGhtweight Hash Tree (LIGHT)—a query-efficient yet low-maintenance indexing scheme.
LIGHT employs a novel naming mechanism and a tree summarization strategy for graceful distribution of its index structure. We show through analysis that it can support various complex queries with near-optimal performance. Extensive experimental results also demonstrate that, compared with state-of-the-art over-DHT indexing schemes, LIGHT saves 50-75 percent of index maintenance cost and substantially improves query performance in terms of both response time and bandwidth consumption.
In addition, LIGHT is designed over generic DHTs and hence can be easily implemented and deployed in any DHT-based P2P system.

INDEX TERMS
Distributed hash tables, indexing, complex queries.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 9, SEPTEMBER 2010

MAXIMIZING THE LIFETIME OF WIRELESS
SENSOR NETWORKS WITH MOBILE SINK
IN DELAY-TOLERANT APPLICATIONS

ABSTRACT
This paper proposes a framework to maximize the lifetime of the wireless sensor networks (WSNs) by using a mobile sink when the underlying applications tolerate delayed information delivery to the sink.
Within a prescribed delay tolerance level, each node does not need to send the data immediately as it becomes available. Instead, the node can store the data temporarily and transmit it when the mobile sink is at the most favorable location for achieving the longest WSN lifetime.
To find the best solution within the proposed framework, we formulate optimization problems that maximize the lifetime of the WSN subject to the delay bound constraints, node energy constraints, and flow conservation constraints.
We conduct extensive computational experiments on the optimization problems and find that the lifetime can be increased significantly as compared to not only the stationary sink model but also more traditional mobile sink models. We also show that the delay tolerance level does not affect the maximum lifetime of the WSN.

INDEX TERMS
Wireless sensor network, lifetime maximization, linear programming, delay-tolerant applications, mobile sink.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
ISSUE DATE: APRIL 2010

MINIMIZING DELAY AND MAXIMIZING LIFETIME FOR WIRELESS SENSOR NETWORKS WITH ANYCAST

ABSTRACT
In this paper, we are interested in minimizing the delay and maximizing the lifetime of event-driven wireless sensor networks for which events occur infrequently. In such systems, most of the energy is consumed when the radios are on, waiting for a packet to arrive. Sleep-wake scheduling is an effective mechanism to prolong the lifetime of these energy-constrained wireless sensor networks.
However, sleep-wake scheduling could result in substantial delays because a transmitting node needs to wait for its next-hop relay node to wake up. An interesting line of work attempts to reduce these delays by developing "anycast"-based packet forwarding schemes, where each node opportunistically forwards a packet to the first neighboring node that wakes up among multiple candidate nodes.
In this paper, we first study how to optimize the anycast forwarding schemes for minimizing the expected packet-delivery delays from the sensor nodes to the sink. Based on this result, we then provide a solution to the joint control problem of how to optimally control the system parameters of the sleep-wake scheduling protocol and the anycast packet-forwarding protocol to maximize the network lifetime, subject to a constraint on the expected end-to-end packet-delivery delay.
Our numerical results indicate that the proposed solution can outperform prior heuristic solutions in the literature, especially under practical scenarios where there are obstructions, e.g., a lake or a mountain, in the coverage area of the wireless sensor network.
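The benefit of forwarding to whichever candidate wakes first is easy to see in a toy model: with i.i.d. exponential wake delays, the first of n candidates wakes n times sooner on average. A quick Monte Carlo check (our illustration, not the paper's model):

```python
import random

random.seed(1)

def expected_first_wake(num_candidates, rate=1.0, trials=100_000):
    """Monte Carlo estimate of the sender's wait until the first of
    num_candidates relays wakes, with i.i.d. Exp(rate) wake delays."""
    total = 0.0
    for _ in range(trials):
        total += min(random.expovariate(rate)
                     for _ in range(num_candidates))
    return total / trials

# The minimum of n Exp(rate) variables is Exp(n * rate), so anycast over
# n candidates cuts the expected one-hop delay by a factor of n.
d1 = expected_first_wake(1)  # close to 1 / rate = 1.0
d4 = expected_first_wake(4)  # close to 1 / (4 * rate) = 0.25
```

The joint control problem in the paper tunes the wake-up rates and candidate sets under exactly this kind of delay expression, subject to the end-to-end delay constraint.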

======================================================
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
VOL. 32, NO. 3, MARCH 2010

OBJCUT: EFFICIENT SEGMENTATION USING TOP-DOWN AND BOTTOM-UP CUES

ABSTRACT
We present a probabilistic method for segmenting instances of a particular object category within an image. Our approach overcomes the deficiencies of previous segmentation techniques based on traditional grid conditional random fields (CRF), namely that
1) they require the user to provide seed pixels for the foreground and the background and
2) they provide a poor prior for specific shapes due to the small neighborhood size of grid CRF.
Specifically, we automatically obtain the pose of the object in a given image instead of relying on manual interaction. Furthermore, we employ a probabilistic model which includes shape potentials for the object to incorporate top-down information that is global across the image, in addition to the grid clique potentials which provide the bottom-up information used in previous approaches. The shape potentials are provided by the pose of the object obtained using an object category model. We represent articulated object categories using a novel layered pictorial structures model. Nonarticulated object categories are modeled using a set of exemplars. These object category models have the advantage that they can handle large intraclass shape, appearance, and spatial variation. We develop an efficient method, OBJCUT, to obtain segmentations using our probabilistic framework.
Novel aspects of this method include:
1) efficient algorithms for sampling the object category models of our choice and
2) the observation that a sampling-based approximation of the expected log-likelihood of the model can be increased by a single graph cut. Results are presented on several articulated (e.g., animals) and nonarticulated (e.g., fruits) object categories.
We provide a favorable comparison of our method with the state of the art in object category specific image segmentation, specifically the methods of Leibe and Schiele and Schoenemann and Cremers.
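The grid clique potentials mentioned above define a standard CRF energy. The sketch below is our own toy (a 4-pixel chain with made-up costs) that brute-forces the minimum-energy labeling; on real images OBJCUT obtains it with a single s-t graph cut instead.

```python
from itertools import product

# unary[p][label]: data cost of assigning label (0 = background,
# 1 = foreground) to pixel p; all numbers are illustrative.
unary = [
    [0.2, 1.0],
    [0.9, 0.3],
    [0.8, 0.2],
    [0.1, 1.1],
]
LAMBDA = 0.5  # pairwise penalty when neighboring pixels disagree

def energy(labels):
    """CRF energy: sum of unary data terms plus pairwise smoothness terms."""
    e = sum(unary[p][lab] for p, lab in enumerate(labels))
    e += sum(LAMBDA for a, b in zip(labels, labels[1:]) if a != b)
    return e

# Brute force over the 2^4 labelings of the chain (feasible only for toys).
best = min(product([0, 1], repeat=4), key=energy)
```

The optimal labeling marks the middle two pixels foreground, trading the pairwise penalty against the unary evidence; the shape potentials in the paper simply add further terms to this energy.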

INDEX TERMS
Object category specific segmentation, conditional random fields, generalized EM, graph cuts

======================================================
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
VOL. 36, NO. 3, MAY/JUNE 2010 409

ON EVENT-BASED MIDDLEWARE FOR
LOCATION-AWARE MOBILE APPLICATIONS

ABSTRACT
As mobile applications become more widespread, programming paradigms and middleware architectures designed to support their development are becoming increasingly important. The event-based programming paradigm is a strong candidate for the development of mobile applications due to its inherent support for the loose coupling between components required by mobile applications.
However, existing middleware that supports the event-based programming paradigm is not well suited to supporting location-aware mobile applications in which highly mobile components come together dynamically to collaborate at some location.
This paper presents a number of techniques including location-independent announcement and subscription coupled with location dependent filtering and event delivery that can be used by event-based middleware to support such collaboration.
We describe how these techniques have been implemented in STEAM, an event-based middleware with a fully decentralized architecture, which is particularly well suited to deployment in ad hoc network environments. The cost of such location-based event dissemination and the benefits of distributed event filtering are evaluated.
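The combination of location-independent subscription and location-dependent filtering can be sketched as follows (class and method names are ours, not STEAM's API, and STEAM is fully decentralized rather than using a single broker):

```python
import math

class Broker:
    """Minimal event broker: subscriptions are announced from anywhere,
    but delivery is filtered by the event's geographic scope."""

    def __init__(self):
        self.subs = []  # list of (callback, subscriber location)

    def subscribe(self, callback, location):
        self.subs.append((callback, location))

    def publish(self, event, origin, scope_radius):
        # Location-dependent filtering: deliver only within the scope.
        for callback, loc in self.subs:
            if math.dist(origin, loc) <= scope_radius:
                callback(event)

delivered = []
broker = Broker()
broker.subscribe(delivered.append, location=(0.0, 0.0))
broker.subscribe(delivered.append, location=(50.0, 50.0))
broker.publish("traffic-jam", origin=(1.0, 1.0), scope_radius=5.0)
# Only the nearby subscriber receives the event.
```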

INDEX TERMS
Distributed systems, middleware, publish subscribe, event-based communication, mobile computing, collaborative and location-aware applications, wireless ad hoc networks

======================================================

IEEE TRANSACTIONS ON SIGNAL PROCESSING
VOL. 58, NO. 2, FEBRUARY 2010

ON LOCAL INTRINSIC DIMENSION ESTIMATION AND ITS APPLICATIONS

ABSTRACT
In this paper, we present multiple novel applications for local intrinsic dimension estimation. There has been much work done on estimating the global dimension of a data set, typically for the purposes of dimensionality reduction.
We show that by estimating dimension locally, we are able to extend the uses of dimension estimation to many applications, which are not possible with global dimension estimation.
Additionally, we show that local dimension estimation can be used to obtain a better global dimension estimate, alleviating the negative bias that is common to all known dimension estimation algorithms.
We illustrate local dimension estimation’s uses towards additional applications, such as learning on statistical manifolds, network anomaly detection, clustering, and image segmentation.
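A common way to estimate dimension locally is a nearest-neighbor maximum-likelihood estimator in the style of Levina and Bickel, sketched here on synthetic data (a 2D plane embedded in 3D; the data, k, and query count are our own choices):

```python
import math
import random

random.seed(0)

def local_dim_mle(points, x, k=10):
    """Local intrinsic dimension estimate at x: inverse mean log-ratio of
    the k-th nearest-neighbor distance to the closer neighbor distances."""
    dists = sorted(math.dist(x, p) for p in points if p is not x)
    rk = dists[k - 1]
    return (k - 1) / sum(math.log(rk / dists[j]) for j in range(k - 1))

# Points on a 2D plane embedded in 3D: local estimates should hover near 2
# even though the ambient dimension is 3.
pts = [(random.random(), random.random(), 0.0) for _ in range(1500)]
queries = random.sample(pts, 80)
avg_dim = sum(local_dim_mle(pts, x) for x in queries) / len(queries)
```

Averaging the per-point estimates, as done here, is one simple way to turn local estimates into a less biased global one, in the spirit of the abstract's claim.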

INDEX TERMS
Geodesics, image segmentation, intrinsic dimension, manifold learning, nearest neighbor graph.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 4, APRIL 2010

ON MULTIHOP DISTANCES IN WIRELESS SENSOR NETWORKS WITH RANDOM NODE LOCATIONS

ABSTRACT
Location and intersensor distance estimations are important functions for the operation of wireless sensor networks, especially when protocols can benefit from the distance information prior to network deployment.
The maximum multihop distance that can be covered in a given number of hops in a sensor network is one such parameter, related to estimates of coverage area, delay, and minimal multihop transmission energy consumption. In randomly deployed sensor networks, intersensor distances are random variables.
Hence, their evaluation requires probabilistic methods, and distance models should involve investigation of distance distribution functions. The current literature on analytical modeling of the maximum distance distribution is limited to 1D networks using the Gaussian pdf.
However, determination of the maximum multihop distance distribution in 2D networks is a quite complex problem. Furthermore, distance distributions in 2D networks are not accurately modeled by the Gaussian pdf.
Hence, we propose a greedy method of distance maximization and evaluate the distribution of the obtained multihop distance through analytical approximations and simulations.
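Since closed forms are hard to obtain in 2D, the distribution of the greedily maximized multihop distance can also be sampled directly. A minimal Monte Carlo sketch with illustrative parameters (not the paper's greedy method or approximations):

```python
import math
import random

random.seed(7)

def greedy_max_distance(num_nodes=800, area=10.0, r=1.0, hops=5):
    """One sample of the greedy multihop distance: starting from a corner,
    repeatedly hop to the in-range node farthest from the starting point."""
    nodes = [(random.uniform(0, area), random.uniform(0, area))
             for _ in range(num_nodes)]
    start = cur = (0.0, 0.0)
    for _ in range(hops):
        reachable = [p for p in nodes if 0 < math.dist(cur, p) <= r]
        if not reachable:
            break  # isolated node: the walk stops early
        cur = max(reachable, key=lambda p: math.dist(start, p))
    return math.dist(start, cur)

samples = [greedy_max_distance() for _ in range(200)]
mean_dist = sum(samples) / len(samples)  # strictly below the hop bound of 5
```

Each hop can add at most the radio range r to the covered distance, so five hops are bounded by 5r; the sampled mean falls noticeably short of that bound, which is the gap the analytical approximations aim to characterize.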

INDEX TERMS
Wireless sensor networks, multihop distance, analysis, simulation, estimation.

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
August 2010 (vol. 9 no. 8)

ON THE PERFORMANCE BOUNDS OF PRACTICAL WIRELESS NETWORK CODING

ABSTRACT
Network coding is an attractive technology that has been shown to improve the throughput of wireless networks. However, a fundamental understanding of how network coding works under realistic scenarios is still lacking.
In this paper, we examine the performance of a recently proposed network coding system under a realistic wireless physical layer and practical random access mechanisms. We propose a key performance measure called “encoding number”—the number of packets that can be encoded via network coding in each transmission.
We provide an upper bound on the encoding number for the general coding topology, and derive the average encoding number and system throughput for a general class of random access mechanisms.
Based on the practical coding system, we also derive a tighter upper bound on the throughput gain for a general wireless network. Our results are of fundamental value for coding-related MAC/Routing protocol design and analysis.

======================================================
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
JANUARY 2010

OPPORTUNITIES AND CHALLENGES IN OFDMA-BASED CELLULAR RELAY NETWORKS: A RADIO RESOURCE MANAGEMENT PERSPECTIVE

ABSTRACT
The opportunities and flexibility of relay networks and orthogonal frequency-division multiple access (OFDMA) make their combination a suitable pairing of network and air-interface technologies for providing reliable and ubiquitous high-data-rate coverage in next-generation cellular networks.
Advanced and intelligent radio resource management (RRM) schemes are known to be crucial towards harnessing these opportunities in the future OFDMA-based relay-enhanced cellular networks. However, it is not very clear how to address the new RRM challenges (such as enabling distributed algorithms, intra/inter-cell routing, intense and dynamic co-channel interference, feedback overhead) in such complex environments comprising a plethora of relay stations of different functionalities and characteristics.
Employing conventional RRM schemes in such networks will be highly inefficient, if not infeasible. Next-generation networks are required to meet the expectations of all wireless users irrespective of their locations; high-data-rate connectivity with mobility and reliability are examples of these expectations. Therefore, fairness is a critical performance aspect that has to be taken into account in the design of prospective RRM schemes.
This paper reviews some of the prominent challenges involved in migrating from the conventional cellular architecture to the relay-based type and discusses how intelligent RRM schemes can exploit the opportunities in the relay-enhanced OFDMA-based cellular networks. We identify the role of multi-antenna systems and explore the current approaches in literature to extend the conventional schedulers to the next-generation relay networks.
The paper also highlights the fairness aspect in such networks in the light of the recent literature, provides some example fairness metrics, and compares the performances of some representative algorithms.

INDEX TERMS
RRM, OFDMA, cellular, relaying, scheduling, routing, throughput, fairness, load balancing.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 6, JUNE 2010 865

OPTIMAL ACCOUNTING POLICIES FOR AAA SYSTEMS IN MOBILE TELECOMMUNICATIONS NETWORKS

ABSTRACT
Authentication, Authorization, and Accounting (AAA) deployments are expected to grow significantly in emerging mobile systems as they offer a plethora of services and mobile applications. In current systems, network access servers (NAS) periodically report the service usage of mobile users located within their coverage areas.
The periodic reports are used by the billing systems to minimize the capital losses incurred if the serving NAS fails. While shorter reporting intervals are desired for lower losses, they can potentially result in an undesirably high signaling load. Because it is prohibitively difficult to obtain optimal reporting intervals in mobile systems with multitudes of services and different mobility profiles, current accounting standards offer no quantitative measures for selecting a proper reporting interval, and AAA systems are typically designed via overprovisioning.
To address this issue, we propose an adaptive optimization mechanism for multiservice AAA systems that limits the potential loss without generating excessive unnecessary usage reports. Our mechanism embraces the current IETF AAA standards, RADIUS and its successor Diameter; it requires no modifications to the AAA protocols or to the network access servers' implementations, and its implementation scope is limited to the AAA systems.
The results demonstrate that our mechanism is robust under various operational conditions, easy to implement, and offers considerable potential for loss control compared to the current static approaches.
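The trade-off the mechanism navigates can be captured by a toy cost model (entirely our illustration, not the paper's mechanism): interim reports cost signaling, while longer intervals increase the usage that goes unreported if the NAS fails mid-session.

```python
# Illustrative parameters for one service class.
RATE = 2.0         # chargeable usage generated per minute
FAIL_PROB = 0.01   # probability the serving NAS fails during the session
REPORT_COST = 0.5  # signaling cost of one interim accounting report
SESSION_LEN = 120.0

def expected_cost(interval):
    """Signaling cost plus expected unreported-usage loss for a reporting
    interval of `interval` minutes."""
    reports = SESSION_LEN / interval
    expected_loss = FAIL_PROB * RATE * interval / 2  # ~half an interval lost
    return reports * REPORT_COST + expected_loss

# Sweep candidate intervals; the optimum balances the two terms near
# sqrt(2 * SESSION_LEN * REPORT_COST / (FAIL_PROB * RATE)) ~ 77 minutes.
best_cost, best_interval = min((expected_cost(t), t) for t in range(1, 121))
```

An adaptive mechanism effectively re-solves this balance as usage rates and failure statistics drift, instead of fixing one interval by overprovisioning.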

INDEX TERMS
AAA, accounting, RADIUS, Diameter, accounting interim interval.

======================================================
MOBILE COMPUTING, IEEE TRANSACTIONS ON
ISSUE DATE: AUG. 2010, VOLUME: 9 ISSUE:8

OPTIMAL JAMMING ATTACK STRATEGIES AND NETWORK DEFENSE POLICIES IN WIRELESS SENSOR NETWORKS

ABSTRACT
We consider a scenario where a sophisticated jammer jams an area in which a single-channel random-access-based wireless sensor network operates. The jammer controls the probability of jamming and the transmission range in order to cause maximal damage to the network in terms of corrupted communication links.
The jammer action ceases when it is detected by the network (namely by a monitoring node), and a notification message is transferred out of the jammed region. The jammer is detected by employing an optimal detection test based on the percentage of incurred collisions. On the other hand, the network defends itself by computing the channel access probability to minimize the jamming detection plus notification time.
To optimize its benefit, the jammer needs to know the network channel access probability and the number of neighbors of the monitoring node; correspondingly, the network needs to know the jamming probability of the jammer. We study the idealized case of perfect knowledge by both the jammer and the network about the strategy of each other, as well as the case where this knowledge is absent on both sides.
The latter is captured by formulating and solving optimization problems where the attacker and the network respond optimally to the worst-case or the average-case strategies of the other party. We also take into account potential energy constraints of the jammer and the network.
We extend the problem to the case of multiple observers and adaptable jamming transmission range and propose a meaningful heuristic algorithm for an efficient jamming strategy.
Our results provide valuable insights about the structure of the jamming problem and associated defense mechanisms and demonstrate the impact of knowledge as well as adoption of sophisticated strategies on achieving desirable performance.

======================================================

KNOWLEDGE AND DATA ENGINEERING, IEEE TRANSACTIONS ON
ISSUE DATE: MARCH 2010, VOLUME: 22 ISSUE:3

VIDE: A VISION-BASED APPROACH FOR
DEEP WEB DATA EXTRACTION

ABSTRACT
Deep Web contents are accessed by queries submitted to Web databases and the returned data records are enwrapped in dynamically generated Web pages (they will be called deep Web pages in this paper).
Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language-dependent.
Web pages are a popular two-dimensional medium, and their contents are always laid out regularly for users to browse. This motivates us to seek a different way to perform deep Web data extraction that overcomes the limitations of previous works by utilizing some interesting visual features common to deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language-independent is proposed.
This approach primarily utilizes the visual features of deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure, "revision," to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.

======================================================
IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 1, FEBRUARY 2010 1

POPI: A USER-LEVEL TOOL FOR INFERRING ROUTER PACKET FORWARDING PRIORITY

ABSTRACT
Packet forwarding prioritization (PFP) in routers is one of the mechanisms commonly available to network operators. PFP can have a significant impact on the accuracy of network measurements, the performance of applications and the effectiveness of network troubleshooting procedures.
Despite its potential impacts, no information on PFP settings is readily available to end users. In this paper, we present an end-to-end approach for PFP inference and its associated tool, POPI. This is the first attempt to infer router packet forwarding priority through end-to-end measurement.
POPI enables users to discover such network policies through measurements of packet losses of different packet types. We evaluated our approach via statistical analysis, simulation and wide-area experimentation in PlanetLab. We employed POPI to analyze 156 paths among 162 PlanetLab sites.
POPI flagged 15 paths with multiple priorities, 13 of which were further validated through hop-by-hop loss rate measurements. In addition, we surveyed the related network operators and received responses for about half of them, all confirming our inferences.
We also compared POPI with inference mechanisms based on other metrics, such as packet reordering [out-of-order (OOO)]. OOO is unable to find many priority paths, such as those implemented via traffic policing. Interestingly, however, it can detect mechanisms that induce delay differences among packet types, such as a slow processing path in the router and port-based load sharing.

INDEX TERMS
Network inference, network neutrality, packet forwarding priority.

======================================================
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
VOL. 21, NO. 3, MARCH 2010 313

PRIVACY-CONSCIOUS LOCATION-BASED QUERIES IN MOBILE ENVIRONMENTS

ABSTRACT
In location-based services, users with location-aware mobile devices are able to make queries about their surroundings anywhere and at any time. While this ubiquitous computing paradigm brings great convenience for information access, it also raises concerns over potential intrusion into user location privacy.
To protect location privacy, one typical approach is to cloak user locations into spatial regions based on user-specified privacy requirements, and to transform location-based queries into region-based queries. In this paper, we identify and address three new issues concerning this location cloaking approach.
First, we study the representation of cloaking regions and show that a circular region generally leads to a small result size for region-based queries. Second, we develop a mobility-aware location cloaking technique to resist trace analysis attacks. Two cloaking algorithms, namely MaxAccu_Cloak and MinComm_Cloak, are designed based on different performance objectives.
Finally, we develop an efficient polynomial algorithm for evaluating circular-region-based kNN queries. Two query processing modes, namely bulk and progressive, are presented to return query results either all at once or in an incremental manner. Experimental results show that our proposed mobility-aware cloaking algorithms significantly improve the quality of location cloaking in terms of an entropy measure without compromising much on query latency or communication cost.
Moreover, the progressive query processing mode achieves a shorter response time than the bulk mode by parallelizing the query evaluation and result transmission.
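The basic cloaking step, transforming an exact location into a circular region that provably contains it, can be sketched as follows (a minimal illustration; the paper's MaxAccu_Cloak and MinComm_Cloak additionally account for mobility to resist trace analysis):

```python
import math
import random

random.seed(3)

def cloak(location, radius):
    """Return a circular cloaking region of the given radius whose center is
    offset uniformly at random, so the true location is uniformly distributed
    inside the reported region."""
    d = radius * math.sqrt(random.random())  # sqrt gives a uniform disk point
    theta = random.uniform(0.0, 2.0 * math.pi)
    center = (location[0] + d * math.cos(theta),
              location[1] + d * math.sin(theta))
    return center, radius

true_loc = (40.0, 20.0)
center, r = cloak(true_loc, radius=0.5)
# The region-based kNN query is then issued for the circle, never the point.
```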

INDEX TERMS
Location-based services, location privacy, query processing, mobile computing.

======================================================

PRIVACY-PRESERVING PUBLIC AUDITING FOR DATA STORAGE SECURITY IN CLOUD COMPUTING

Cloud Computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand high-quality applications and services from a shared pool of configurable computing resources.
By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities.
Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed.
To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met:
1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user;
2) the third-party auditing process should bring in no new vulnerabilities toward user data privacy. In this paper, we utilize and uniquely combine the public-key-based homomorphic authenticator with random masking to achieve a privacy-preserving public cloud data auditing system, which meets all the above requirements.
To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.

======================================================

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
VOL. 22, NO. 4, APRIL 2010

PROSPECTIVE INFECTIOUS DISEASE OUTBREAK DETECTION USING MARKOV SWITCHING MODELS

ABSTRACT
Accurate and timely detection of infectious disease outbreaks provides valuable information which can enable public health officials to respond to major public health threats in a timely fashion. However, disease outbreaks are often not directly observable. For surveillance systems used to detect outbreaks, noises caused by routine behavioral patterns and by special events can further complicate the detection task.
Most existing detection methods combine a time series filtering procedure followed by a statistical surveillance method. The performance of this “two-step” detection method is hampered by the unrealistic assumption that the training data are outbreak-free.
Moreover, existing approaches are sensitive to extreme values, which are common in real-world data sets. We considered the problem of identifying outbreak patterns in a syndrome count time series using Markov switching models. The disease outbreak states are modeled as hidden state variables which control the observed time series.
A jump component is introduced to absorb sporadic extreme values that may otherwise weaken the ability to detect slow-moving disease outbreaks. Our approach outperformed several state-of-the-art detection methods in terms of detection sensitivity using both simulated and real-world data.
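The hidden-state idea can be illustrated with a two-state model (baseline vs. outbreak) with Poisson counts, filtered by the standard HMM forward recursion. The rates, transition matrix, and data below are invented, and the paper's full model adds the jump component and is estimated by Gibbs sampling.

```python
import math

RATES = [5.0, 15.0]     # mean daily counts for state 0 (baseline), 1 (outbreak)
TRANS = [[0.95, 0.05],  # TRANS[p][s] = P(state_t = s | state_{t-1} = p)
         [0.10, 0.90]]

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def filtered_outbreak_prob(counts):
    """P(outbreak on the final day | counts so far), via the forward algorithm."""
    alpha = [0.5 * poisson_pmf(RATES[s], counts[0]) for s in (0, 1)]
    for k in counts[1:]:
        alpha = [poisson_pmf(RATES[s], k) *
                 sum(alpha[p] * TRANS[p][s] for p in (0, 1))
                 for s in (0, 1)]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]  # normalize to avoid underflow
    return alpha[1] / sum(alpha)

quiet = filtered_outbreak_prob([4, 6, 5, 5, 4])
spike = filtered_outbreak_prob([4, 6, 5, 14, 16])
```

A run of elevated counts drives the filtered outbreak probability toward one, while baseline counts keep it near zero; the paper's jump component exists precisely to keep isolated extreme values from triggering this switch.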

INDEX TERMS
Markov switching models, syndromic surveillance, Gibbs sampling, outbreak detection.

======================================================

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
VOL. 21, NO. 5, MAY 2010 631

QUALITY OF TRILATERATION: CONFIDENCE-BASED ITERATIVE LOCALIZATION

ABSTRACT
The proliferation of wireless and mobile devices has fostered the demand for context-aware applications, in which location is one of the most significant contexts. Multilateration, as a basic building block of localization, however, has not yet overcome the challenges of 1) poor ranging measurements; 2) dynamic and noisy environments; and 3) fluctuations in wireless communications.
Hence, multilateration-based approaches often suffer from poor accuracy and can hardly be employed in practical applications. In this study, we propose Quality of Trilateration (QoT) that quantifies the geometric relationship of objects and ranging noises. Based on QoT, we design a confidence-based iterative localization scheme, in which nodes dynamically select trilaterations with the highest quality for location computation.
To validate this design, a prototype network based on wireless sensor motes is deployed and the results show that QoT well represents trilateration accuracy, and the proposed scheme significantly improves localization accuracy.
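Plain trilateration, the building block that QoT scores, reduces to a small linear system once the first circle equation is subtracted from the others. A noiseless sketch with illustrative coordinates:

```python
import math

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
ranges = [math.dist(a, target) for a in anchors]  # noiseless for the demo

# Subtracting the first circle equation from the others linearizes the system:
# 2(xi - x0) x + 2(yi - y0) y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2.
(x0, y0), r0 = anchors[0], ranges[0]
A, b = [], []
for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
    A.append((2 * (xi - x0), 2 * (yi - y0)))
    b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)

# Two equations in two unknowns: solve by Cramer's rule.
det = A[0][0] * A[1][1] - A[1][0] * A[0][1]
x_est = (b[0] * A[1][1] - b[1] * A[0][1]) / det
y_est = (A[0][0] * b[1] - A[1][0] * b[0]) / det
```

With noisy ranges this same system is solved in the least-squares sense, and QoT's confidence weighting decides which such trilaterations are trustworthy enough to propagate.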

INDEX TERMS
Localization, noisy range measurements, trilateration, wireless ad-hoc and sensor networks.

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 6, JUNE 2010 2101

RANDOM ACCESS TRANSPORT CAPACITY

ABSTRACT
We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions.
The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed-form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination.
We also derive the optimum number of hops and optimal per hop success probability and show that our result follows the well-known square root scaling law while providing exact expressions for the preconstants, which contain most of the design-relevant network parameters.
Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability.

INDEX TERMS
Transmission capacity, transport capacity, network information theory, stochastic geometry, ad hoc networks

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: JUNE 2010

RATE CONTROL WITH PAIRWISE
INTERSESSION NETWORK CODING

ABSTRACT
In this paper, we develop a distributed rate-control algorithm for networks with multiple unicast sessions when network coding is allowed across different sessions. Building on recent flow-based characterization of pairwise intersession network coding, the corresponding optimal rate-control problem is formulated as a convex optimization problem.
The formulation exploits pairwise coding possibilities between any pair of sessions, where any coded symbol is formed by coding over at most two original symbols. The objective function is the sum of the utilities based on the rates supported by each unicast session. Working on the Lagrangian of the formulated problem, a distributed algorithm is developed with little coordination among intermediate nodes. Each unicast session has the freedom to choose its own utility function.
The only information exchange required by the source is the weighted sum of the queue length of each link, which can be piggybacked to the acknowledgment messages. In addition to the optimal rate-control algorithm, we propose a decentralized pairwise random coding scheme that decouples the decision of coding from that of rate control, which further enhances the distributiveness of the proposed scheme.
The convergence of the rate-control algorithm is proven analytically and verified by extensive simulations. Simulation results also demonstrate the advantage of the proposed algorithm over the state-of-the-art in terms of both throughput and fairness.
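The Lagrangian mechanics behind such algorithms can be seen in miniature with one shared link and two log-utility sessions (a textbook dual-decomposition sketch under our own parameters, without the intersession coding component):

```python
CAPACITY = 10.0  # capacity of the single shared link
STEP = 0.005     # subgradient step size for the price update
price = 1.0      # Lagrange multiplier for the link (a queue-length proxy)

for _ in range(2000):
    # Each source maximizes log(x) - price * x on its own, giving x = 1/price;
    # only the link price needs to be fed back to the sources.
    rates = [1.0 / price, 1.0 / price]
    # The link adjusts its price in proportion to the excess demand.
    price = max(1e-6, price + STEP * (sum(rates) - CAPACITY))

# Both sessions converge to the proportionally fair share CAPACITY / 2.
```

The paper's observation that the required feedback (a weighted sum of queue lengths) can be piggybacked on acknowledgments is what keeps this price exchange cheap in practice.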

======================================================

IEEE TRANSACTIONS ON IMAGE PROCESSING
VOL. 19, NO. 5, MAY 2010

RATE-DISTORTION OPTIMIZED BITSTREAM EXTRACTOR FOR MOTION SCALABILITY IN WAVELET-BASED SCALABLE VIDEO CODING

ABSTRACT
Motion scalability is designed to improve the coding efficiency of a scalable video coding framework, especially in the medium to low range of decoding bit rates and spatial resolutions. In order to fully benefit from the superiority of motion scalability, a rate-distortion optimized bitstream extractor, which determines the optimal motion quality layer for any specific decoding scenario, is required.
In this paper, the determination process first starts off with a brute-force searching algorithm. Although it guarantees optimal performance within the search domain, it suffers from high computational complexity.
Two properties, i.e., the monotonically nondecreasing property and the unimodal property, are then derived to accurately describe the rate-distortion behavior of motion scalability.
Based on these two properties, modified searching algorithms are proposed to reduce the complexity (up to five times faster) and to achieve the global optimality, even for those decoding scenarios outside the search domain.
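The unimodal property is exactly what lets a ternary search replace a brute-force sweep: discarding a third of the interval per probe pair is guaranteed not to discard the minimizer. A sketch on a made-up unimodal rate-distortion cost (the real extractor searches discrete motion quality layers):

```python
def cost(layer):
    """Toy unimodal distortion-vs-layer curve with its minimum near 6.3."""
    return (layer - 6.3) ** 2 + 1.0

def ternary_search_min(f, lo, hi, eps=1e-6):
    """Return the approximate minimizer of a unimodal f on [lo, hi]."""
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2  # by unimodality, the minimum cannot lie in (m2, hi]
        else:
            lo = m1  # by unimodality, the minimum cannot lie in [lo, m1)
    return (lo + hi) / 2.0

best_layer = ternary_search_min(cost, 0.0, 10.0)
```

The monotonically nondecreasing property plays the analogous role of pruning whole branches of the search domain before any probing is done.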

INDEX TERMS
Bitstream extractor, motion scalability, rate distortion optimization, scalable video coding

======================================================

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
VOL. 9, NO. 3, MARCH 2010 1

RATELESS CODING FOR HYBRID FREE-SPACE OPTICAL AND RADIO-FREQUENCY COMMUNICATION

ABSTRACT
Free-space optical (FSO) transmission systems enable high-speed communication with relatively small deployment costs. However, FSO suffers a critical disadvantage, namely susceptibility to fog, smoke, and conditions alike. A possible solution to this dilemma is the use of hybrid systems employing FSO and radio frequency (RF) transmission.
In this paper, we propose the application of a rateless coded automatic repeat-request (ARQ) scheme for such hybrid FSO/RF systems.
The advantages of our approach are
(a) the full utilization of available FSO and RF channel resources at any time, regardless of FSO or RF channel conditions and temporal variations, and
(b) no need for a-priori rate selection at the transmitter.
In order to substantiate these claims, we establish the pertinent capacity limits for hybrid FSO/RF transmission and present simulation results for transmission with off-the-shelf Raptor codes, which achieve realized rates close to these limits under a wide range of channel conditions.
We also show that in conditions of strong atmospheric turbulence, rateless coding is advantageous over fixed-rate coding with rate adaptation at the transmitter.

INDEX TERMS
Free-space optical (FSO), hybrid FSO/RF transmission, rateless coding, channel capacity

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
JULY 2010 (VOL. 9 NO. 7)

REDUCING POWER CONSUMPTION
WITH QOS CONSTRAINTS IN
IEEE 802.16E WIRELESS NETWORKS

ABSTRACT
Mobile Broadband Wireless Access (BWA) networks will, in the forthcoming years, offer multiple differentiated services to users with high mobility requirements, connecting via portable or wearable devices that necessarily rely on batteries.
Since a relatively large fraction of energy is consumed by such devices for transmitting/receiving data over the air, mechanisms are needed to reduce power consumption in order to increase the lifetime of devices and, hence, improve users' satisfaction. The IEEE 802.16 standard, which has supported mobile BWA since its "e" amendment in 2005, defines power saving functions at the Medium Access Control (MAC) layer, which are designed to be operated during open traffic sessions for the greatest reduction in energy consumption.
However, enabling power saving usually increases the transmission latency, which can negatively affect the Quality of Service (QoS) experienced by users. On the other hand, imposing stringent QoS requirements may limit the amount of energy that can be saved. In this paper, an extensive study of the mutual interaction between power saving mechanisms and QoS support is carried out in the context of the IEEE 802.16e.
In particular, two types of delay-constrained applications with different requirements are considered, i.e., Web and Voice over IP (VoIP) for which the IEEE 802.16e standard specifies two different power saving classes.
The performance is assessed via detailed packet-level simulation, with respect to several system parameters. To capture the relative contribution of each factor to the energy- and QoS-related metrics, part of the evaluation is carried out by means of a 2^k · r factorial analysis.
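As a rough illustration of the 2^k · r factorial analysis mentioned above, the sketch below computes main effects and the interaction for a 2^2 design with r replications. The factor assignments and response values are illustrative, not taken from the paper:

```java
// Minimal sketch of a 2^k * r factorial analysis for k = 2 factors at two
// levels each, with r replications per run. Runs are ordered (-,-), (+,-),
// (-,+), (+,+) for factors A and B (e.g., power saving on/off, QoS class).
public class FactorialAnalysis {
    // y[run][rep]: the observed response for each of the 4 runs, r reps each.
    // Returns {effect of A, effect of B, interaction AB}.
    public static double[] effects(double[][] y) {
        int r = y[0].length;
        double[] mean = new double[4];
        for (int run = 0; run < 4; run++) {
            for (int rep = 0; rep < r; rep++) mean[run] += y[run][rep];
            mean[run] /= r;
        }
        double qA  = (-mean[0] + mean[1] - mean[2] + mean[3]) / 4; // main effect of A
        double qB  = (-mean[0] - mean[1] + mean[2] + mean[3]) / 4; // main effect of B
        double qAB = ( mean[0] - mean[1] - mean[2] + mean[3]) / 4; // A-B interaction
        return new double[]{qA, qB, qAB};
    }
}
```

Here raising factor A from its low to high level shifts the response mean by 2·qA, which is how the relative contribution of each factor is quantified.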

======================================================

IEEE TRANSACTIONS ON COMPUTERS
VOL. 59, NO. 5, MAY 2010

REDUNDANT-DIGIT FLOATING-POINT ADDITION SCHEME BASED ON A STORED ROUNDING VALUE

ABSTRACT
Due to the widespread use and inherent complexity of floating-point addition, much effort has been devoted to its speedup via algorithmic and circuit techniques.
We propose a new redundant-digit representation for floating-point numbers that leads to computation speedup in two ways:
1) Reducing the per-operation latency when multiple floating-point additions are performed before result conversion to nonredundant format and
2) Removing the addition associated with rounding. While the first of these advantages is offered by other redundant representations, the second one is unique to our approach, which replaces the power- and area-intensive rounding addition by low-latency insertion of a rounding two-valued digit, or twit, in a position normally assigned to a redundant twit within the redundant-digit format. Instead of conventional sign-magnitude representation, we use a sign-embedded encoding that leads to lower hardware redundancy, and thus, reduced power dissipation.
While our intermediate redundant representations remain incompatible with the IEEE 754-2008 standard, many application-specific systems, such as those in DSP and graphics domains, can benefit from our designs. Description of our radix-16 redundant representation and its addition algorithm is followed by the architecture of a floating-point adder based on this representation.
Detailed circuit designs are provided for many of the adder's critical subfunctions. Simulation and synthesis based on a 0.13 μm CMOS standard process show a latency reduction of 15 percent or better, and both area and power savings of around 58 percent, compared with the best designs reported in the literature.

INDEX TERMS
Adder/subtractor, computer arithmetic, floating point, redundant format, rounding, signed-digit number system.

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING

RELIABLE ANCHOR-BASED
SENSOR LOCALIZATION IN IRREGULAR AREAS

ABSTRACT
Localization is a fundamental problem in wireless sensor networks and its accuracy impacts the efficiency of location-aware protocols and applications, such as routing and storage. Most previous localization algorithms assume that sensors are distributed in regular areas without holes or obstacles, which often does not reflect real-world conditions, especially for outdoor deployment of wireless sensor networks.
In this paper, we propose a novel scheme called reliable anchor-based localization (RAL), which can greatly reduce the localization error due to the irregular deployment areas. We first provide theoretical analysis of the minimum hop length for uniformly distributed networks and then show its close approximation to empirical results, which can assist in the construction of a reliable minimal hop-length table offline.
Using this table, we are able to tell whether a path is severely detoured and compute a more accurate average hop length as the basis for distance estimation.
At runtime, the RAL scheme
1) utilizes the reliable minimal hop length from the table as the threshold to differentiate between reliable anchors and unreliable ones, and
2) allows each sensor to determine its position utilizing only distance constraints obtained from reliable anchors.
The simulation results show that RAL can effectively filter out unreliable anchors and therefore improve the localization accuracy.
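The anchor-filtering idea above can be sketched in a few lines: a path whose average per-hop progress falls below the reliable minimal hop length is treated as detoured, and only distances from reliable anchors feed the estimate. Thresholds and names here are illustrative, not from the paper:

```java
// Sketch of RAL-style anchor filtering: detoured paths inflate the hop count,
// which shrinks the average hop length below the reliable minimum.
public class ReliableAnchors {
    // (dx, dy): known displacement between two anchors; hops: hop count of the
    // path between them; minHopLen: reliable minimal hop length from the table.
    public static boolean isReliablePath(double dx, double dy, int hops, double minHopLen) {
        double dist = Math.sqrt(dx * dx + dy * dy);
        return dist / hops >= minHopLen; // below threshold => severely detoured
    }

    // Distance estimate used for localization: hops times the average hop length.
    public static double estimateDistance(int hops, double avgHopLen) {
        return hops * avgHopLen;
    }
}
```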

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 1, FEBRUARY 2010

RENDERED PATH: RANGE-FREE LOCALIZATION IN ANISOTROPIC SENSOR NETWORKS WITH HOLES

ABSTRACT
Sensor positioning is a crucial part of many location dependent applications that utilize wireless sensor networks (WSNs). Current localization approaches can be divided into two groups: range-based and range-free.
Due to the high costs and critical assumptions, the range-based schemes are often impractical for WSNs. The existing range-free schemes, on the other hand, suffer from poor accuracy and low scalability. Without the help of a large number of uniformly deployed seed nodes, those schemes fail in anisotropic WSNs with possible holes.
To address this issue, we propose the Rendered Path (REP) protocol. To the best of our knowledge, REP is the only range-free protocol for locating sensors with a constant number of seeds in anisotropic sensor networks.

Index Terms
Distributed algorithms, distributed computing, multisensor systems, position measurement.

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: APRIL 2010

REPLICATION ROUTING IN DTNS: A RESOURCE ALLOCATION APPROACH

ABSTRACT
Routing protocols for disruption-tolerant networks (DTNs) use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, and so these approaches have only an incidental effect on such routing metrics as maximum or average delivery delay.
In this paper, we present RAPID, an intentional DTN routing protocol that can optimize a specific routing metric, such as the worst-case delivery delay or the fraction of packets that are delivered within a deadline.
The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities that determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype deployed over a vehicular DTN testbed of 40 buses and through simulations based on real traces.
To our knowledge, this is the first paper to report on a routing protocol deployed on a real outdoor DTN. Our results suggest that RAPID significantly outperforms existing routing protocols on several metrics.
We also show empirically that for small loads, RAPID is within 10% of the optimal performance.

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: JUNE 2010

S4: SMALL STATE AND SMALL STRETCH
COMPACT ROUTING PROTOCOL FOR
LARGE STATIC WIRELESS NETWORKS

ABSTRACT
Routing protocols for large wireless networks must address the challenges of reliable packet delivery at increasingly large scales and with highly limited resources.
Attempts to reduce routing state can result in undesirable worst-case routing performance, as measured by stretch, which is the ratio of the hop count of the selected path to that of the optimal path. We present a new routing protocol, Small State and Small Stretch (S4), which jointly minimizes the state and stretch.
S4 uses a combination of beacon distance-vector-based global routing state and scoped distance-vector-based local routing state to achieve a worst-case stretch of 3 using $O(\sqrt{N})$ routing state per node in an $N$-node network. Its average routing stretch is close to 1.
S4 further incorporates local failure recovery to achieve resilience to dynamic topology changes. We use multiple simulation environments to assess performance claims at scale and use experiments in a 42-node wireless sensor network testbed to evaluate performance under realistic RF and failure dynamics.
The results show that S4 achieves scalability, efficiency, and resilience in a wide range of scenarios.
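The two quantities the abstract trades off can be made concrete in a couple of lines: stretch is the ratio of the chosen path's hop count to the optimal one, and the per-node state bound grows as the square root of the network size. A trivial sketch (helper names are illustrative):

```java
// Sketch of the two metrics S4 jointly minimizes: routing stretch and
// per-node state.
public class RoutingStretch {
    // Stretch = hop count of the selected path / hop count of the optimal path.
    // S4 guarantees this is at most 3 in the worst case.
    public static double stretch(int selectedHops, int optimalHops) {
        return (double) selectedHops / optimalHops;
    }

    // O(sqrt(N)) beacons/state per node in an N-node network, as in S4's bound.
    public static int beaconCount(int n) {
        return (int) Math.ceil(Math.sqrt(n));
    }
}
```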

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 7, JULY 2010

SECURE DATA COLLECTION IN WIRELESS SENSOR
NETWORKS USING RANDOMIZED DISPERSIVE ROUTES

ABSTRACT
Compromised node and denial of service are two key attacks in wireless sensor networks (WSNs). In this paper, we study data delivery mechanisms that can with high probability circumvent black holes formed by these attacks. We argue that classic multipath routing approaches are vulnerable to such attacks, mainly due to their deterministic nature.
So once the adversary acquires the routing algorithm, it can compute the same routes known to the source, hence, making all information sent over these routes vulnerable to its attacks. In this paper, we develop mechanisms that generate randomized multipath routes.
Under our designs, the routes taken by the “shares” of different packets change over time. So even if the routing algorithm becomes known to the adversary, the adversary still cannot pinpoint the routes traversed by each packet. Besides randomness, the generated routes are also highly dispersive and energy efficient, making them quite capable of circumventing black holes.
We analytically investigate the security and energy performance of the proposed schemes. We also formulate an optimization problem to minimize the end-to-end energy consumption under given security constraints. Extensive simulations are conducted to verify the validity of our mechanisms.

INDEX TERMS
Randomized multipath routing, wireless sensor network, secure data delivery.
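One common way to realize the "shares" idea above is an n-out-of-n XOR split: the packet is divided into n shares, each sent over an independently randomized path, and an adversary who captures fewer than all n shares learns nothing. This is a generic sketch of share splitting, not the paper's specific construction:

```java
import java.util.Random;

// Split a packet into n XOR shares so that ALL n shares are needed to
// reconstruct it; each share can then travel a different randomized route.
public class ShareSplitter {
    // n-1 shares are uniformly random pads; the last share is the packet
    // XORed with all the pads.
    public static byte[][] split(byte[] data, int n, Random rng) {
        byte[][] shares = new byte[n][data.length];
        byte[] last = data.clone();
        for (int i = 0; i < n - 1; i++) {
            rng.nextBytes(shares[i]);
            for (int j = 0; j < data.length; j++) last[j] ^= shares[i][j];
        }
        shares[n - 1] = last;
        return shares;
    }

    // XOR of all shares recovers the original packet.
    public static byte[] combine(byte[][] shares) {
        byte[] out = new byte[shares[0].length];
        for (byte[] s : shares)
            for (int j = 0; j < out.length; j++) out[j] ^= s[j];
        return out;
    }
}
```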

======================================================
IEEE TRANSACTIONS ON MOBILE COMPUTING
VOL. 9, NO. 6, JUNE 2010

SECURE DISTANCE-BASED LOCALIZATION IN THE PRESENCE OF CHEATING BEACON NODES

ABSTRACT
Secure distance-based localization in the presence of cheating beacon (or anchor) nodes is an important problem in mobile wireless ad hoc and sensor networks. Despite significant research efforts in this direction, some fundamental questions still remain unaddressed: In the presence of cheating beacon nodes, what are the necessary and sufficient conditions to guarantee a bounded error during a two-dimensional distance-based location estimation? Under these necessary and sufficient conditions, what class of localization algorithms can provide this error bound?
In this paper, we attempt to answer these and other related questions by following a careful analytical approach. Specifically, we first show that when the number of cheating beacon nodes is greater than or equal to a given threshold, there do not exist any two-dimensional distance-based localization algorithms that can guarantee a bounded error.
Furthermore, when the number of cheating beacons is below this threshold, we identify a class of distance-based localization algorithms that can always guarantee a bounded localization error.
Finally, we outline three novel distance-based localization algorithms that belong to this class of bounded error localization algorithms. We verify their accuracy and efficiency by means of extensive simulation experiments using both simple and practical distance estimation error models.

INDEX TERMS
Wireless networks, distance-based localization, security.
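For context on what a distance-based location estimate computes, the sketch below solves the textbook trilateration system for three non-collinear beacons with noiseless distances (by subtracting the first circle equation from the other two, which linearizes the system). It is background material, not one of the paper's three algorithms:

```java
// Textbook 2-D trilateration: given three beacon positions (bx[i], by[i]) and
// distances d[i] to the unknown node, solve the linearized 2x2 system.
public class Trilateration {
    public static double[] locate(double[] bx, double[] by, double[] d) {
        double a11 = 2 * (bx[1] - bx[0]), a12 = 2 * (by[1] - by[0]);
        double a21 = 2 * (bx[2] - bx[0]), a22 = 2 * (by[2] - by[0]);
        double b1 = d[0] * d[0] - d[1] * d[1]
                  + bx[1] * bx[1] - bx[0] * bx[0] + by[1] * by[1] - by[0] * by[0];
        double b2 = d[0] * d[0] - d[2] * d[2]
                  + bx[2] * bx[2] - bx[0] * bx[0] + by[2] * by[2] - by[0] * by[0];
        double det = a11 * a22 - a12 * a21; // nonzero iff beacons are non-collinear
        return new double[]{(b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det};
    }
}
```

A cheating beacon corresponds to a corrupted d[i]; the paper's results bound how many such corruptions any algorithm of this kind can tolerate.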

======================================================

DEPENDABLE AND SECURE COMPUTING, IEEE TRANSACTIONS ON
ISSUE DATE: JAN.-MARCH 2010, VOLUME: 7 ISSUE:1

SIGFREE: A SIGNATURE-FREE BUFFER
OVERFLOW ATTACK BLOCKER

ABSTRACT
We propose SigFree, an online signature-free out-of-the-box application-layer method for blocking code-injection buffer overflow attack messages targeting various Internet services such as Web services.
Motivated by the observation that buffer overflow attacks typically contain executables whereas legitimate client requests never contain executables in most Internet services, SigFree blocks attacks by detecting the presence of code. Unlike the previous code detection algorithms, SigFree uses a new data-flow analysis technique called code abstraction that is generic, fast, and hard for exploit code to evade.
SigFree is signature free, thus it can block new and unknown buffer overflow attacks; SigFree is also immunized from most attack-side code obfuscation methods. Since SigFree is a transparent deployment to the servers being protected, it is good for economical Internet-wide deployment with very low deployment and maintenance cost.
We implemented and tested SigFree; our experimental study shows that the dependency-degree-based SigFree could block all types of code-injection attack packets (above 750) tested in our experiments with very few false positives. Moreover, SigFree causes very small extra latency to normal client requests when some requests contain exploit code.

======================================================

IEEE TRANSACTIONS ON COMMUNICATIONS
VOL. 58, NO. 4, APRIL 2010

STRUCTURE, PROPERTY, AND DESIGN OF NONBINARY REGULAR CYCLE CODES

ABSTRACT
In this paper, we study nonbinary regular LDPC cycle codes whose parity check matrix H has fixed column weight j = 2 and fixed row weight d.
Through graph analysis, we show that the parity check matrix H of a regular cycle code can be put into an equivalent structure in the form of concatenation of row-permuted block-diagonal matrices if d is even, or, if d is odd and the code’s associated graph contains at least one spanning subgraph that consists of disjoint edges.
This equivalent structure of H enables:
i) parallel processing in linear-time encoding;
ii) considerable resource reduction on the code storage for encoding and decoding; and
iii) parallel processing in sequential belief-propagation decoding, which increases the throughput without compromising performance or complexity.
Regarding the code's structure design, we propose a novel design methodology based on the equivalent structure of H. Finally, we present various numerical results on the code performance and the decoding complexity.

INDEX TERMS
LDPC, regular cycle code, Galois field, graph theory, decoding algorithm, code design

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: JUNE 2010

SYBILLIMIT: A NEAR-OPTIMAL SOCIAL NETWORK DEFENSE AGAINST SYBIL ATTACKS

ABSTRACT
Open-access distributed systems such as peer-to-peer systems are particularly vulnerable to sybil attacks, where a malicious user creates multiple fake identities (called sybil nodes). Without a trusted central authority that can tie identities to real human beings, defending against sybil attacks is quite challenging.
Among the small number of decentralized approaches, our recent SybilGuard protocol leverages a key insight on social networks to bound the number of sybil nodes accepted. Despite its promising direction, SybilGuard can allow a large number of sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast-mixing, which has never been confirmed in the real world.
This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard, but offers dramatically improved and near-optimal guarantees. The number of sybil nodes accepted is reduced by a factor of $\Theta(\sqrt{n})$, or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a $\log n$ factor away from optimal when considering approaches based on fast-mixing social networks.
Finally, based on three large-scale real-world social networks, we provide the first evidence that real-world social networks are indeed fast-mixing. This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach.

======================================================
IEEE/ACM TRANSACTIONS ON NETWORKING,
VOL. 18, NO. 2, APRIL 2010

TCAM RAZOR: A SYSTEMATIC APPROACH TOWARDS MINIMIZING PACKET CLASSIFIERS IN TCAMS

ABSTRACT
Packet classification is the core mechanism that enables many networking services on the Internet such as firewall packet filtering and traffic accounting. Using ternary content addressable memories (TCAMs) to perform high-speed packet classification has become the de facto standard in industry. TCAMs classify packets in constant time by comparing a packet with all classification rules of ternary encoding in parallel. Despite their high speed, TCAMs suffer from the well-known range expansion problem.
As packet classification rules usually have fields specified as ranges, converting such rules to TCAM-compatible rules may result in an explosive increase in the number of rules. This is not a problem if TCAMs have large capacities. Unfortunately, TCAMs have very limited capacity, and more rules mean more power consumption and more heat generation for TCAMs.
Even worse, the number of rules in packet classifiers has been increasing rapidly with the growing number of services deployed on the Internet. In this paper, we consider the following problem: given a packet classifier, how can we generate another semantically equivalent packet classifier that requires the least number of TCAM entries?
In this paper, we propose a systematic approach, the TCAM Razor, that is effective, efficient, and practical. In terms of effectiveness, TCAM Razor achieves a total compression ratio of 29.0%, which is significantly better than the previously published best result of 54%. In terms of efficiency, our TCAM Razor prototype runs in seconds, even for large packet classifiers.
Finally, in terms of practicality, our TCAM Razor approach can be easily deployed as it does not require any modification to existing packet classification systems, unlike many previous range encoding schemes.

Index Terms
Algorithm, packet classification, router design, ternary content addressable memory (TCAM) optimization.
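The range expansion problem described above is easy to demonstrate: converting an integer range into TCAM-compatible ternary prefixes with the standard greedy split can turn one rule into many entries. The sketch below shows that expansion (it illustrates the problem TCAM Razor mitigates, not TCAM Razor itself):

```java
import java.util.ArrayList;
import java.util.List;

// Expand an integer range [lo, hi] over a w-bit field into ternary prefixes
// using the standard greedy algorithm. Even the small 4-bit range [1, 14]
// needs 6 TCAM entries, illustrating the range expansion problem.
public class RangeToPrefix {
    public static List<String> expand(int lo, int hi, int w) {
        List<String> out = new ArrayList<>();
        while (lo <= hi) {
            // Largest aligned block [lo, lo + 2^s - 1] that stays inside [lo, hi].
            int s = 0;
            while (s < w && lo % (1 << (s + 1)) == 0 && lo + (1 << (s + 1)) - 1 <= hi) s++;
            String bits = String.format("%" + w + "s", Integer.toBinaryString(lo)).replace(' ', '0');
            out.add(bits.substring(0, w - s) + "*".repeat(s)); // s low bits wildcarded
            lo += 1 << s;
        }
        return out;
    }
}
```

In the worst case a single w-bit range expands to 2w - 2 prefixes, which is why minimizing the rule set, as TCAM Razor does, matters so much for TCAM capacity and power.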

======================================================
IEEE TRANSACTIONS ON MOBILE COMPUTING
JULY 2010 (VOL. 9 NO. 7)

TDMA SCHEDULING WITH OPTIMIZED ENERGY EFFICIENCY AND MINIMUM DELAY IN CLUSTERED WIRELESS SENSOR NETWORKS

ABSTRACT
In this paper, we propose a solution to the scheduling problem in clustered wireless sensor networks (WSNs). The objective is to provide network-wide optimized time division multiple access (TDMA) schedules that can achieve high power efficiency, zero conflict, and reduced end-to-end delay.
To achieve this objective, we first build a nonlinear cross-layer optimization model involving the network, medium access control (MAC), and physical layers, which aims at reducing the overall energy consumption.
We solve this problem by transforming the model into two simpler subproblems. Based on the network-wide flow distribution calculated from the optimization model and transmission power on every link, we then propose an algorithm for deriving the TDMA schedules, utilizing the slot reuse concept to achieve minimum TDMA frame length.
Numerical results reveal that our proposed solution reduces the energy consumption and delay significantly, while simultaneously satisfying a specified reliability objective.

======================================================

IEEE TRANSACTIONS ON NEURAL NETWORKS
VOLUME 21, ISSUE 6 (JUNE 2010)

THE INFINITE HIDDEN MARKOV
RANDOM FIELD MODEL

ABSTRACT
Hidden Markov random field (HMRF) models are widely used for image segmentation, as they appear naturally in problems where a spatially constrained clustering scheme is asked for.
A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of region clusters derived by the image segmentation procedure. Existing methods, including likelihood- or entropy-based criteria, and reversible Markov chain Monte Carlo methods, usually tend to yield noisy model size estimates while imposing heavy computational requirements.
Recently, Dirichlet process (DP, infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori; infinite mixture models based on the original DP, or spatially constrained variants of it, have been applied to unsupervised image segmentation with promising results.
Under this motivation, to resolve the aforementioned issues of HMRF models, in this paper, we introduce a nonparametric Bayesian formulation for the HMRF model, the infinite HMRF model, formulated on the basis of a joint Dirichlet process mixture (DPM) and Markov random field (MRF) construction.
We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally demonstrate its advantages over competing methodologies.

======================================================

IEEE/ACM TRANSACTIONS ON NETWORKING
VOL. 18, NO. 2, PP. 406 - 419, APRIL 2010

THE OPTIMALITY OF TWO PRICES: MAXIMIZING REVENUE IN A STOCHASTIC COMMUNICATION SYSTEM

ABSTRACT
This paper considers the problem of pricing and transmission scheduling for an Access Point (AP) in a wireless network, where the AP provides service to a set of mobile users.
The goal of the AP is to maximize its own time-average profit. We first obtain the optimum time-average profit of the AP and prove the “Optimality of Two Prices” theorem. We then develop an online scheme that jointly solves the pricing and transmission scheduling problem in a dynamic environment.
The scheme uses an admission price and a business decision as tools to regulate the incoming traffic and to maximize revenue. We show the scheme can achieve any average profit that is arbitrarily close to the optimum, with a tradeoff in average delay.
This holds for general Markovian dynamics for channel and user state variation, and does not require a-priori knowledge of the Markov model. The model and methodology developed in this paper are general and apply to other stochastic settings where a single party tries to maximize its time-average profit.

Index Terms
Wireless Mesh Network, Pricing, Queueing, Dynamic Control, Lyapunov analysis, Optimization

======================================================

NETWORKING, IEEE/ACM TRANSACTIONS ON
ISSUE DATE: APRIL 2010

TOWARD PRACTICAL OPPORTUNISTIC ROUTING WITH INTRA-SESSION NETWORK CODING FOR MESH NETWORKS

ABSTRACT
We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multipath routing.
We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem.
All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation.
We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling).
The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20%-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).
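At its simplest, coding packets inside a session means a broadcast region forwards linear combinations rather than individual packets; over GF(2) a combination is just an XOR, and a receiver that already holds one constituent recovers the other from a single reception. This is a minimal GF(2) illustration of the coding primitive, not the paper's full intra-session coding scheme:

```java
// GF(2) packet combination: a coded packet is the XOR of two equal-length
// packets. XORing the coded packet with either constituent recovers the other.
public class XorCoding {
    public static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }
}
```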

======================================================

IEEE TRANSACTIONS ON COMPUTERS
VOL. 59, NO. 7, JULY 2010 969

TSS: EFFICIENT TERM SET SEARCH IN LARGE PEER-TO-PEER TEXTUAL COLLECTIONS

ABSTRACT
Previous multikeyword search in DHT-based P2P systems often relies on multiple single keyword search operations, suffering from unacceptable traffic cost and poor accuracy.
Precomputing term-set-based index can significantly reduce the cost but needs exponentially growing index size. Based on our observations that
1) queries are typically short and
2) users usually have limited interests, we propose a novel index pruning method, called TSS.
By solely publishing the most relevant term sets from documents on the peers, TSS provides search performance comparable to a centralized solution, while the index size is reduced from exponential to the scale of O(n log n).
We evaluate this design through comprehensive trace-driven simulations using the TREC WT10G data collection and the query log of a major commercial search engine.

INDEX TERMS
Peer-to-peer, multikeyword searching, ranking.
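Since queries are short, a term-set index only needs the subsets of a document's most relevant terms up to a small size k; the sketch below enumerates those candidate index keys. The size cap and term list are illustrative, not the paper's pruning rule:

```java
import java.util.ArrayList;
import java.util.List;

// Enumerate all non-empty term sets of size <= k from a document's top terms;
// each set is a candidate key to publish in the DHT index.
public class TermSets {
    public static List<List<String>> subsetsUpTo(List<String> terms, int k) {
        List<List<String>> out = new ArrayList<>();
        backtrack(terms, k, 0, new ArrayList<>(), out);
        return out;
    }

    private static void backtrack(List<String> terms, int k, int start,
                                  List<String> cur, List<List<String>> out) {
        if (!cur.isEmpty()) out.add(new ArrayList<>(cur)); // record current set
        if (cur.size() == k) return;                       // size cap reached
        for (int i = start; i < terms.size(); i++) {
            cur.add(terms.get(i));
            backtrack(terms, k, i + 1, cur, out);
            cur.remove(cur.size() - 1);
        }
    }
}
```

Without pruning, a document with m terms would publish on the order of 2^m sets; capping the size at k and keeping only the most relevant terms is what brings the index back to a manageable scale.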

======================================================

IEEE TRANSACTIONS ON MOBILE COMPUTING
VOLUME 9, ISSUE 7 (JULY 2010)

UNCERTAINTY MODELING AND REDUCTION IN MANETS

ABSTRACT
Evaluating and quantifying trust stimulates collaboration in mobile ad hoc networks (MANETs). Many existing reputation systems sharply divide the trust value into right or wrong, thus ignoring another core dimension of trust: uncertainty.
As uncertainty deeply impacts a node's anticipation of others' behavior and decisions during interaction, we include uncertainty in the reputation system. Specifically, we define a new uncertainty model to directly reflect a node's confidence in the sufficiency of its past experience, and study how the collection of trust information affects uncertainty in nodes' opinions.
After defining a way to reveal and compute the uncertainty in trust opinions, we exploit mobility, one of the important characteristics of MANETs, to efficiently reduce uncertainty and to speed up trust convergence.
Two different categories of mobility-assisted uncertainty reduction schemes are provided: the proactive schemes exploit mobile nodes to collect and broadcast trust information to achieve trust convergence; the reactive schemes provide the mobile nodes methods to get authenticated and bring their reputation in the original region to the destination region.
Both of the schemes offer a controllable trade-off between delay, cost, and uncertainty. Extensive analytical and simulation results are presented to support our uncertainty model and mobility-assisted reduction schemes.
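A common way to make the belief/disbelief/uncertainty split concrete is the evidence-to-opinion mapping from subjective logic, where uncertainty shrinks as first-hand experience accumulates, matching the abstract's notion of confidence in the sufficiency of past evidence. The formula below is that standard mapping, offered as an illustration rather than the paper's exact model:

```java
// Subjective-logic-style opinion from interaction evidence: with p positive and
// n negative observations, belief = p/(p+n+2), disbelief = n/(p+n+2), and
// uncertainty = 2/(p+n+2). The three components always sum to 1.
public class TrustOpinion {
    public static double[] opinion(int positives, int negatives) {
        double total = positives + negatives + 2.0;
        return new double[]{positives / total, negatives / total, 2.0 / total};
    }
}
```

With no evidence the opinion is pure uncertainty (0, 0, 1); mobility-assisted collection of more observations drives the uncertainty component toward 0, which is exactly the convergence the schemes above accelerate.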

======================================================

IEEE TRANSACTIONS ON IMAGE PROCESSING
VOL. 19, NO. 2, FEBRUARY 2010

UNEQUAL POWER ALLOCATION FOR JPEG TRANSMISSION OVER MIMO SYSTEMS

ABSTRACT
With the introduction of multiple transmit and receive antennas in next generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that explore advantages of multiple antenna systems matched with source statistics are expected to be developed.
Based on this idea, we present an unequal power allocation scheme for transmission of JPEG compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period.
Results show that our unequal power allocation scheme provides significant image quality improvement compared to various equal power allocation schemes, with a peak signal-to-noise ratio gain as high as 14 dB at low signal-to-noise ratios.

INDEX TERMS
Distortion model, joint source-channel coding, JPEG, multiple-input multiple-output systems, unequal error protection, unequal power allocation.

======================================================
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
VOL. 32, NO. X, XXXXXXX 2010

UNSUPERVISED OBJECT SEGMENTATION WITH A HYBRID GRAPH MODEL (HGM)

ABSTRACT
In this work, we address the problem of performing class-specific unsupervised object segmentation, i.e., automatic segmentation without annotated training images.
Object segmentation can be regarded as a special data clustering problem where both class-specific information and local texture/color similarities have to be considered. To this end, we propose a hybrid graph model (HGM) that can make effective use of both symmetric and asymmetric relationship among samples.
The vertices of a hybrid graph represent the samples and are connected by directed edges and/or undirected ones, which represent the asymmetric and/or symmetric relationship between them, respectively. When applied to object segmentation, vertices are superpixels, the asymmetric relationship is the conditional dependence of occurrence, and the symmetric relationship is the color/texture similarity.
By combining the Markov chain formed by the directed subgraph and the minimal cut of the undirected subgraph, the object boundaries can be determined for each image. Using the HGM, we can conveniently achieve simultaneous segmentation and recognition by integrating both top-down and bottom-up information into a unified process. Experiments on 42 object classes (9,415 images in total) show promising results.

INDEX TERMS
Segmentation, graph-theoretic methods, spectral clustering.

======================================================

IEEE TRANSACTIONS ON COMPUTERS

UPPER BOUNDS FOR DYNAMIC MEMORY ALLOCATION

ABSTRACT
In this paper, we study the upper bounds of memory storage for two different allocators. In the first case, we consider a general allocator that can allocate memory blocks anywhere in the available heap space.
In the second case, a more economical allocator constrained by the address-ordered first-fit allocation policy is considered. We derive the upper bound of memory usage for all allocators and present a systematic approach to search for allocation/deallocation patterns that might lead to the largest fragmentation.
These results are beneficial in embedded systems where memory usage must be reduced and predictable because of lack of swapping facility. They are also useful in other types of computing systems.

INDEX TERMS
Dynamic memory allocation, memory storage, storage allocation/deallocation policies, first-fit allocator, garbage collection.

======================================================
IEEE TRANSACTIONS ON MOBILE COMPUTING
JULY 2010 (VOL. 9 NO. 7)

VEBEK: VIRTUAL ENERGY-BASED ENCRYPTION AND KEYING FOR WIRELESS SENSOR NETWORKS

ABSTRACT
Designing cost-efficient, secure network protocols for Wireless Sensor Networks (WSNs) is a challenging problem because sensors are resource-limited wireless devices. Since the communication cost is the most dominant factor in a sensor's energy consumption, we introduce an energy-efficient Virtual Energy-Based Encryption and Keying (VEBEK) scheme for WSNs that significantly reduces the number of transmissions needed for rekeying to avoid stale keys. In addition to the goal of saving energy, minimal transmission is imperative for some military applications of WSNs where an adversary could be monitoring the wireless spectrum.
VEBEK is a secure communication framework where sensed data is encoded using a scheme based on a permutation code generated via the RC4 encryption mechanism. The key to the RC4 encryption mechanism dynamically changes as a function of the residual virtual energy of the sensor.
Thus, a one-time dynamic key is employed for one packet only and different keys are used for the successive packets of the stream. The intermediate nodes along the path to the sink are able to verify the authenticity and integrity of the incoming packets using a predicted value of the key generated by the sender's virtual energy, thus requiring no need for specific rekeying messages. VEBEK is able to efficiently detect and filter false data injected into the network by malicious outsiders.
The VEBEK framework consists of two operational modes (VEBEK-I and VEBEK-II), each of which is optimal for different scenarios. In VEBEK-I, each node monitors its one-hop neighbors, whereas VEBEK-II statistically monitors downstream nodes. We have evaluated VEBEK's feasibility and performance analytically and through simulations.
Our results show that VEBEK, without incurring transmission overhead (increasing packet size or sending control messages for rekeying), is able to eliminate malicious data from the network in an energy-efficient manner. We also show that our framework performs better than other comparable schemes in the literature with an overall 60--100 percent improvement in energy savings without the assumption of a reliable medium access control layer.
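The keying idea above can be sketched directly: the per-packet key is derived from the sender's residual virtual energy, so a forwarder that tracks the same energy value can predict the key and verify the packet without any rekeying messages. RC4 is the cipher named in the abstract; the energy-to-key mapping below is an illustrative assumption, not the paper's exact construction:

```java
// Sketch of virtual-energy-based keying: derive the RC4 key from the sender's
// residual virtual energy, so key prediction replaces rekeying messages.
public class VebekCipher {
    // Illustrative assumption: pack the virtual-energy counter into an 8-byte key.
    public static byte[] keyFromEnergy(long virtualEnergy) {
        byte[] key = new byte[8];
        for (int i = 0; i < 8; i++) key[i] = (byte) (virtualEnergy >>> (8 * i));
        return key;
    }

    // Plain RC4 (KSA + PRGA); encryption and decryption are the same operation.
    public static byte[] rc4(byte[] key, byte[] data) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;
        for (int i = 0, j = 0; i < 256; i++) {          // key-scheduling algorithm
            j = (j + s[i] + (key[i % key.length] & 0xFF)) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        byte[] out = new byte[data.length];
        for (int n = 0, i = 0, j = 0; n < data.length; n++) { // keystream generation
            i = (i + 1) & 0xFF;
            j = (j + s[i]) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[n] = (byte) (data[n] ^ s[(s[i] + s[j]) & 0xFF]);
        }
        return out;
    }
}
```

Because the energy value changes with every transmission, each packet is effectively encoded under a one-time dynamic key, and a forwarder holding the predicted energy value decodes and verifies it with the same `rc4` call.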