
How Apollo 11′s 1.024MHz guidance computer did a lot with very little

Nearly 44 years ago, computer hardware was in an entirely different place than it is now. The levels of performance don’t even fit on the same scale. The anniversary of the Apollo 11 moon landing is upon us, and those brave space pioneers got by without a 3GHz multi-core CPU. The guidance computer used on the Apollo 11 mission ran at only 1.024 MHz.

The moon landing was the height of technological achievement at the time, and some of the rocket technology is still relevant today. That computer, though, has been left in the dust. In fact, it was well and truly obsolete a few years after the landing.

The Intel 8086 came about roughly ten years after the Apollo landing, marking the beginning of x86. Apollo 11’s computer had 4 registers — essentially slots for holding numeric values. The 8086 boosted that to eight 16-bit registers.

The IBM PC XT ran the follow-up to that chip, the famous 8088. This computer ran at 4.77MHz, which sounds incredibly slow by today’s standards but is still more than four times as fast as the Apollo 11 computer. The XT also packed in eight times the memory used on Apollo 11.

The Apollo 11 guidance computer actually had some impressive software features for a system that didn’t even run a graphical interface. It could multitask up to 8 different operations, but it didn’t work the way we think of multitasking today. Modern operating systems use preemptive multitasking, which lets the OS distribute computing resources as needed. On Apollo 11, the programs were in charge and had to hand control back to the OS periodically. A kind of virtual machine was used to allow developers to mix hardware-level instructions with virtual machine commands within the same assembler code.
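As a rough illustration of the difference, here is a minimal, hypothetical Java sketch of that cooperative style (the task names and the round-robin loop are invented for the example, not how the AGC itself was programmed): the scheduler never interrupts anything, so each task must do a small slice of work and then hand control back.

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal illustration of cooperative multitasking: the "OS" only regains
// control when a task voluntarily returns. A task that never returned
// would starve every other task, which is the weakness of this model.
public class CooperativeScheduler {

    // A task does one small slice of work, then reports whether it has more to do.
    interface Task {
        boolean runSlice();
    }

    public static void main(String[] args) {
        Queue<Task> tasks = new ArrayDeque<>();

        // Two toy jobs standing in for guidance work; purely illustrative.
        tasks.add(counterTask("attitude-update", 3));
        tasks.add(counterTask("display-refresh", 2));

        // Round-robin loop: run one slice, re-queue the task if it is not finished.
        while (!tasks.isEmpty()) {
            Task t = tasks.poll();
            if (t.runSlice()) {
                tasks.add(t);
            }
        }
    }

    // Builds a task that "works" for the given number of slices before finishing.
    static Task counterTask(String name, int slices) {
        int[] remaining = {slices};
        return () -> {
            remaining[0]--;
            System.out.println(name + " did a slice, " + remaining[0] + " left");
            return remaining[0] > 0;
        };
    }
}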

Inputting commands required translating the English words into “verb noun pairs,” which could be entered as numbers. There was a handy sign in the cabin as a reminder. To top it all off, none of the computer’s systems were software upgradeable: all the code was stored in read-only memory. On Apollo 14, a year and a half later, the crew was forced to manually input code to patch a system malfunction; it took 90 minutes just to type it in.
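To make the idea concrete, here is a small, hypothetical Java sketch of that command style (the verb and noun codes below are invented for illustration and are not the real AGC assignments): one number selects the action, another selects the data it applies to.

import java.util.Map;

// Toy model of "verb noun" input: a verb number picks an action, a noun
// number picks the data it acts on. All codes here are made up.
public class VerbNounDemo {

    static final Map<Integer, String> VERBS = Map.of(
            11, "display",
            22, "load");

    static final Map<Integer, String> NOUNS = Map.of(
            31, "mission clock",
            42, "velocity and altitude");

    public static void main(String[] args) {
        execute(11, 31); // keyed in as numbers: "display the mission clock"
    }

    static void execute(int verb, int noun) {
        System.out.println(VERBS.get(verb) + " " + NOUNS.get(noun));
    }
}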

Maybe your computer is a little sluggish and your smartphone is a couple of years old, but you have it better than those astronauts did.

Intel Aims to “Re-Architect” Datacenters to Meet Demand for New Services

  • Reveals new details of the forthcoming 22nm Intel® Atom™ processors C2000 product family, enabling the company to target a larger portion of the datacenter market.
  • Unveils future roadmap of 14nm datacenter products including a system-on-chip (SoC) that for the first time will incorporate Intel’s next-generation Broadwell architecture to address an even broader range of workloads.
  • Rackspace Hosting* announces that it will deploy a new generation of rack designs as part of its hybrid cloud solutions aligned with Intel’s Rack Scale Architecture vision.

As the massive growth of information technology services places increasing demand on the datacenter, Intel Corporation today outlined its strategy to re-architect the underlying infrastructure, allowing companies and end-users to benefit from an increasingly services-oriented, mobile world.

The company also announced additional details about its next-generation Intel® Atom™ processor C2000 product family (codenamed “Avoton” and “Rangeley”), as well as outlined its roadmap of next-generation 14nm products for 2014 and beyond. This robust pipeline of current and future products and technologies will allow Intel to expand into new segments of the datacenter that look to transition from proprietary designs to more open, standards-based compute models.

“Datacenters are entering a new era of rapid service delivery,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “Across network, storage and servers we continue to see significant opportunities for growth. In many cases, it requires a new approach to deliver the scale and efficiency required, and today we are unveiling the near and long-term actions to enable this transformation.”

As more mobile devices connect to the Internet, cloud-based software and applications get smarter by learning from the billions of people and machines using them, resulting in a new era of context-rich experiences and services. It also results in a massive number of network connections and a continuous stream of real-time, unstructured data. New challenges for networks, computing and storage are emerging as the growing volume of data is transported, collected, aggregated and analyzed in datacenters. As a result, datacenters must be more agile and service-driven than ever before, and easier to manage and operate.

The role of information technology has evolved from being a way to reduce costs and increase corporate productivity to becoming the means to deliver new services to businesses and consumers. For example, Disney* recently started providing visitors with wirelessly connected wristbands to enhance customers’ in-park experience through real-time data analytics. Additionally, a smart traffic safety program from Bocom* in China seeks to identify traffic patterns in a city of ten million people and intelligently offer better routing options for vehicles on the road.

‘Re-Architecting’ Network, Storage and Servers

To help companies prepare for the next generation of datacenters, Intel revealed its plans to virtualize the network, enable smart storage solutions and invest in innovative rack optimized architectures.

Bryant highlighted Intel’s Rack Scale Architecture (RSA), an advanced design that promises to dramatically increase the utilization and flexibility of the datacenter to deliver new services. Rackspace Hosting*, an open cloud company, today announced the deployment of new server racks that are a step toward realizing Intel’s RSA vision, powered by Intel® Xeon® processors and Intel Ethernet controllers, with storage accelerated by Intel Solid State Drives. The Rackspace design is the first commercial rack scale implementation.

The networking industry is on the verge of a transition similar to what the server segment experienced years ago. Equipping the network with open, general purpose processing capabilities provides a way to maximize network bandwidth, significantly reduce cost and provide the flexibility to offer new services. For example, with a virtualized software defined network, the time to provision a new service can be reduced to just minutes from two to three weeks with traditional networks. Intel introduced Open Network Platform reference designs to help OEMs build and deploy this new generation of networks.

Data growth is a challenge to all datacenters and transferring this large volume of data for processing within a traditional, rigid storage architecture is costly and time consuming. By implementing intelligent storage technologies and tools, Intel is helping to reduce the amount of data that needs to be stored, and is improving how data is used for new services.

Traditional servers are also evolving. To meet the diverse needs of datacenter operators who deploy everything from compute intensive database applications to consumer facing Web services that benefit from smaller, more energy-efficient processing, Intel outlined its plan to optimize workloads, including customized CPU and SoC configurations.

As part of its strategy, Intel revealed new details for the forthcoming Intel® Atom™ processors C2000 product family aimed for low-energy, high-density microservers and storage (codenamed “Avoton”), and network devices (codenamed “Rangeley”). This second generation of Intel’s 64-bit SoCs is expected to become available later this year and will be based on the company’s 22nm process technology and the innovative Silvermont microarchitecture. It will feature up to eight cores with integrated Ethernet and support for up to 64GB of memory.

The new products are expected to deliver up to four times the energy efficiency and up to seven times the performance of the first-generation Intel Atom processor-based server SoCs introduced in December last year. Intel has been sampling the new Intel Atom processor server product family to customers since April and has already more than doubled the number of system designs compared to the previous generation.

Roadmap for Expansion

The move to services-oriented datacenters presents considerable opportunities for Intel to expand into new segments. To help bolster the underlying technologies that power much of the next generation of datacenters, Intel outlined its roadmap of next-generation products based on its forthcoming 14nm process technology scheduled for 2014 and beyond. These products are aimed at microservers, storage and network devices and will offer an even broader set of low-power, high-density solutions for Web-scale applications and services.

The future products include the next generation of Intel Xeon processors E3 family (codenamed “Broadwell”) built for processor and graphic-centric workloads such as online gaming and media transcoding. It also includes the next generation of Intel Atom processor SoCs (codenamed “Denverton”) that will enable even higher density deployments for datacenter operators. Intel also disclosed an addition to its future roadmap – a new SoC designed from the ground up for the datacenter based on Intel’s next-generation Broadwell microarchitecture that follows today’s industry leading Haswell microarchitecture. This SoC will offer higher levels of performance in high density, extreme energy efficient systems that datacenter operators will expect in this increasingly services-oriented, mobile world.

Ubiquitech Software Expands Globally by Acquiring Blue Crush Marketing Group, LLC, an Innovative Worldwide Marketing Company in a 20 Billion Dollar per Year Industry

Ubiquitech Software, Inc. (www.ubiquitechsoftware.com) (OTC Pink: UBQU), an international technology and services organization, announced today that it has completed a Definitive Agreement to acquire 100% of the equity of Blue Crush Marketing Group, LLC (http://www.bluecrushmarketing.com), a dynamic multi-media, multi-faceted corporation utilizing state-of-the-art global Internet marketing plus Direct Response Television (DRTV), radio, and traditional Internet marketing. Additionally, AffiliateCashExpress.com (www.AffiliateCashExpress.com) was recently developed to quickly take advantage of and capitalize on a business model growing in popularity throughout the world: CPA advertising. The two companies have been conducting the requisite due diligence since the signing of a letter of intent in June. The Blue Crush management team will assume management of the business on completion in order to expand the business and grow its platforms worldwide.

James Ballas, CEO of Blue Crush Marketing Group, commented, “I am very pleased to reach this stage of our transaction. We are excited to move forward with Ubiquitech Software and take our business platforms to the next level. The Direct Response TV and radio industry is an over 20 Billion dollar per year industry and BCMG exists to capture a segment of this lucrative and ever expanding market. We are particularly excited about our newest division, AffiliateCashExpress.com, as we enter one of the fastest growing strategies in advertising today.”

About Ubiquitech Software, Inc.

Ubiquitech Software, Inc., through its newly acquired subsidiary Blue Crush Marketing Group, LLC, is a dynamic multi-media, multi-faceted corporation utilizing state-of-the-art global Internet marketing, plus Direct Response (DRTV) Television, Radio, and traditional Internet marketing, to drive traffic to the new and emerging multi-billion dollar industries.

This press release contains forward-looking statements. Words such as “expects,” “intends,” “believes,” and similar expressions reflecting something other than historical fact are intended to identify forward-looking statements, but are not the exclusive means of identifying such statements. These forward-looking statements involve a number of risks and uncertainties, including the timely development and market acceptance of products and technologies, the ability to secure additional sources of finance, the ability to reduce operating expenses, and other factors described in the Company’s filings with the OTC Markets Group. The actual results that the Company achieves may differ materially from any forward-looking statement due to such risks and uncertainties. The Company undertakes no obligation to revise or update any forward-looking statements in order to reflect events or circumstances that may arise after the date of this release.

Yahoo Acquisition of Chinese Startup

Beijing – Yahoo Inc. is shopping again. This time the target is a young pioneering company, a startup from Beijing: Zletic, a provider of social-network data analysis services that Yahoo purchased on Thursday (July 18, 2013). The value of the acquisition, however, was not disclosed.
On its website, Zletic said it was pleased with the acquisition. Zletic’s founder is Hao Zheng, a former employee of Yahoo China, who founded the company only a year ago.
A Yahoo spokesperson said that with this acquisition, Zletic’s eight employees will be brought into Yahoo’s research and development office in Beijing.
This is Yahoo’s 19th acquisition under Marissa Mayer, who has developed a taste for buying startups. Mayer wants Yahoo to focus on mobile and on younger generations of users.
Some time ago Yahoo acquired a string of startups including Summly, Xobni and Tumblr; Tumblr was purchased for US$1.1 billion in cash.

Review: First 8-inch Windows tablet is a device that shouldn’t exist

My dissatisfaction with PC OEMs is something I have documented in the past. They offer a confusing array of products and tend to cut corners in the worst ways imaginable. The OEM response to Windows 8 has been to produce a wide range of machines sporting novel form factors to fit all sorts of niches, both real and imagined.

One niche that the OEMs haven’t tried to fill, however, has been sub-10-inch tablets. That’s not altogether surprising. Microsoft designed Windows 8 for screens of 10 inches or more, and initially the operating system’s hardware requirements had a similar constraint.

That decision looked a little short-sighted after the success of tablets such as the Google Nexus 7 and the iPad mini. Accordingly, Microsoft changed the rules in March, opening the door to a range of smaller Windows tablets.

The Acer Iconia W3 is the first—and currently the only—8-inch Windows tablet. That attribute alone makes it in some sense noteworthy. Sadly, it’s about the only thing that does.

Spec-wise, this is another Intel Clover Trail tablet, and its internals are basically the same as the devices that launched last year (such as its bigger brother, the Acer Iconia W510). This means a 1.8 GHz, dual-core, four-thread Intel Atom Z2760 CPU, 2 GB RAM, 64 GB flash storage (which with Acer’s default partitioning leaves a little over 29 GB usable), front and rear cameras, Bluetooth 4.0, and 802.11b/g/n (no 5 GHz support). There’s a micro-HDMI and a micro-USB port for external connectivity (a separate cable converts the micro-USB port into a full-size one), along with an SD card slot. The tablet has a speaker adequate for notification sounds but little more.

As a result, performance and battery life are similar to what we’ve seen before. The Iconia W3 comes equipped with full-blown Windows 8, unlike ARM tablets, so it can run any 32-bit Windows application—should you really want to. Clover Trail’s GPU performance is such that games and other graphics-intensive programs won’t run well, however.

Eight inches of horror

The new bits on this tablet are really the screen and the size.

Screens are important. We spend essentially all our time interacting with devices looking at screens. Cost-cutting on screens is unforgivable, as a bad screen will damage every single interaction you have with the device. This goes doubly so for tablets, where the screen works not only as an output device but also as the primary input device.

The Acer Iconia W3’s screen is a standout—because it is worst-in-class. I hated every moment I used the Iconia W3, and I hated it because I hated the screen. Its color accuracy and viewing angles are both miserable (whites aren’t white—they’re weirdly colorful and speckled). The screen has a peculiar grainy appearance that makes it look permanently greasy. You can polish as much as you like; it will never go away. The whole effect is reminiscent in some ways of old resistive screens.

It’s hard to overstate just how poor this screen is. At any reasonable tablet viewing distance, the color of the screen is uneven. The viewing angle is so narrow that at typical hand-held distances, the colors change across the width of the screen. At full arm’s length the screen does finally look even, but the device is obviously unusable that way.

Acer has clearly skimped on the screen. I’m sure the panel in the W3 was quite cheap, and that may be somewhat reflected in the unit’s retail price ($379 for a 32GB unit, $429 for this 64GB one—putting it at the same price as the 32GB iPad mini, which has a comparable amount of available disk space), but who cares? It doesn’t matter how cheap something is if you don’t want to use it at all.

This poor screen quality isn’t a question of resolution, either. 1280×800 is not a tremendously high resolution, but text looks crisp enough. At 186 pixels per inch, 1280×800 feels more or less OK for this size of device.
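That figure checks out arithmetically, assuming the panel’s diagonal is 8.1 inches (the diagonal isn’t stated in the spec rundown above, so treat it as an assumption):

\[ \text{PPI} = \frac{\sqrt{1280^2 + 800^2}}{8.1} \approx \frac{1509.4}{8.1} \approx 186 \]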

The low resolution does, however, have one significant drawback: it disables Windows 8’s side-by-side Metro multitasking, which requires a resolution of at least 1366×768. The W3’s screen is 86 pixels too narrow, so the Metro environment is strictly one application at a time.

This is an unfortunate decision. The side-by-side multitasking is one of the Metro environment’s most compelling features. Keeping Twitter or Messenger snapped to the side makes a lot of sense and works well. I’ve never used Windows 8 on a device that didn’t support side-by-side Metro multitasking before, and I don’t ever want to again.

Size-wise, the W3 may be small for a Windows tablet, but it’s not exactly small. It’s fat. The W3 is 11.4 mm thick. The iPad mini, in comparison, is 7.2 mm thick. The Iconia W3 is also heavy at 500 g; the iPad mini, in comparison, is 308 g. That makes the W3 more than 50 percent thicker and more than 50 percent heavier.

The thickness makes the lack of a full-sized USB port on the device more than a little confusing. There’s certainly room for a full USB port, and a full port would be more convenient than the dongle. But for whatever reason, Acer didn’t give us one.

The device itself feels solid enough, albeit plasticky. It doesn’t exude quality, but it’s a step or two up from the bargain basement.

Keyboard non-dock

The W3 also has a keyboard accessory. As is common for this kind of thing, the keyboard has no electrical connection to the tablet. It’s a Bluetooth keyboard powered by a pair of AAA batteries. It has a groove along the top that can hold the tablet in both landscape and portrait orientations and a clip on the back that lets you use the keyboard as a kind of screen protector.

The keyboard has to be manually paired to the tablet. It’s more or less full-size, with a reasonable key layout. It’s a typical mediocre keyboard. The feel is a little on the squishy side, lacking the crispness of, for example, Microsoft’s Type Cover for its Surface tablets. It’s better than any on-screen keyboard, and to that extent it does its job. But it’s a long way from being an actually good keyboard.

The groove does hold the tablet up, and on a level surface the unit doesn’t topple over, but it’s not as satisfactory as some of the hinged keyboard/docks we’ve seen on other devices. Tilt the base while carrying it or using it on your lap and the tablet is liable to fall out.

Crank Software Selects GrammaTech to Turn Up Software Quality and Security

GrammaTech, Inc., a leading software developer specializing in software assurance tools, today announced that Crank Software, Inc., an innovator of embedded graphical user interface (GUI) solutions, is using GrammaTech’s CodeSonar to advance the integrity of their code.

Crank Software’s products and services enable R&D teams and user interface (UI) designers to quickly and collaboratively develop rich, animated UIs for resource-constrained embedded devices. These embedded software solutions are used in safety-critical applications, such as animated global positioning systems, in-car graphical displays and user interfaces on factory floors, so software quality and security are paramount. To enhance these areas, the team at Crank is now using CodeSonar’s advanced static analysis capability to more efficiently find and fix quality and security issues within their code.

“We wanted an innovative, high-performance static analysis tool we could drop into our process and quickly see improvements,” explained Thomas Fletcher, VP of Research and Development at Crank Software. Now that Crank’s development teams have integrated CodeSonar into their production process, quantifiable results have reinforced their choice to adopt the powerful tool. “Issues are being caught and fixed very early in the coding process. I look at these as problems I won’t have to hassle with in QA, and most critically, calls to customers I will never have to make,” he said.

CodeSonar provides Crank’s team with a high quality solution that integrates well and allows Crank’s engineers to fix problems early in development, saving time. As a result, they’ve also improved their end product and Crank Software is now feeling better-positioned for the certifications it wants to achieve to drive greater adoption. As Fletcher explained, “We wanted a comprehensive tool to push the quality and security of our software forward. And we got exactly what we aimed for.”

For a more detailed look into Crank Software’s adoption of CodeSonar, view the case study.

About GrammaTech and CodeSonar:
GrammaTech’s static analysis tools are used worldwide by Fortune 500 companies, educational institutions, startups and government agencies. The staff includes 15 PhD experts in static analysis and a superb engineering team, all focused on creating the most innovative and in-depth analysis algorithms. The company’s flagship product, CodeSonar, is a sophisticated static analysis tool that performs a whole-program, interprocedural analysis on C/C++, Java and binary code, identifying complex programming bugs that can result in serious reliability or security problems. More information about CodeSonar can be found on the GrammaTech website.

Make OS X Faster & Efficient

Because the hardware and software are created and designed by one company, Apple’s machines can work optimally and efficiently. One of the advantages is that when we install OS X, we don’t need to fuss around looking for drivers for the Mac we use.

Moreover, because of its security model, not many viruses have managed to mess up OS X, though some malware has still managed to sneak in. Its goal is usually not to break OS X, but rather to steal sensitive or personal data on the Mac.

While OS X is arguably efficient out of the box, we can still tinker with it so that it runs even faster.
Here are some things we can do:

Hardware side

1. Upgrade the memory to the maximum. For the MacBook Pro from 2012 onward this is not possible, because the memory is soldered directly to the motherboard. Even so, the minimum amount fitted, 4 GB, is quite capable; 4 GB is the minimum needed for all the major applications, heavy ones included, to run comfortably. The more memory, the better.

2. Upgrade the hard drive. A Mac runs comfortably when there is enough free space on the OS X partition: at least 30 GB of free space should be available. If you use heavy applications such as Photoshop or AutoCAD, keep at least 80-120 GB of free space on the drive. Get used to storing data on an external hard drive so the internal one keeps some breathing room. If you must, upgrade the internal drive to a larger capacity; better yet, upgrade to an SSD. SSDs are still expensive, but their performance is much snappier.

 

Software side

1. Repair Disk Permissions. This procedure checks whether the status of files still matches the original. File status can change when the applications that use those files run into problems, and as a result the Mac can run slowly or behave strangely: applications quitting on their own, slower performance, slower booting, and so on. The fix is quite easy: run Disk Utility, click the partition that holds OS X, click Repair Disk Permissions and wait until the process completes. After repairing permissions, it is best to follow up with the second step.


2. Reset the NVRAM. Non-volatile RAM (NVRAM) is where OS X keeps many of the settings we use most often, so applications start with the configuration we expect. At a certain point, though, the NVRAM fills up and can actually make the Mac misbehave. Resetting it is quite straightforward: turn on the computer, immediately press Cmd + Alt + P + R, and hold the keys until the Mac sounds its startup chime three times.

3. Keep enough free space on the OS X disk/partition. The trick is to move data to another hard drive and remove any unwanted applications. With so many free apps available on the App Store, some of us get a little “greedy” installing them; in the end many of those applications don’t suit us and just fill up the hard drive. For most applications, uninstalling is simply a matter of dragging the application file to the Trash, but it doesn’t hurt to uninstall with software such as CleanApp, AppZapper or CleanMyMac.

4. Clean up the desktop. Try not to store files on the desktop, as this slows down booting. At every boot, OS X prepares each file on the desktop so it can be accessed at any time. With only one or two files the effect is barely noticeable, but you will feel it with a desktop full of files.

 

5. Delete unneeded languages. The Mac ships with many languages so it is easy to use for people from various countries. We certainly don’t need all of them. If you only need English, it is better to delete the languages you don’t need, especially since they take up a fair amount of hard drive space. The easiest way is to use Monolingual, which can be downloaded from http://www.monolingual.sourceforge.net. It is easy to use: just check the languages you want to delete and click Remove.

6. Turn off animations. When an application is launched, the Mac shows an animation that makes the app appear to open out of the icon we double-clicked, and when the mouse moves over an application icon in the Dock, the icon is automatically magnified. These animations can be disabled so the Mac feels faster. Open System Preferences > Dock and uncheck “Animate opening applications”. Also avoid the automatically changing wallpaper option in System Preferences > Desktop & Screen Saver.


7. Trim the Startup Items. Some applications start running as soon as OS X boots. Check whether any applications that don’t really matter are launching along with it: open System Preferences > Users & Groups and uncheck the applications you don’t want to run at startup.

 

Monitor the Mac at All Times

After performing the optimization steps above, we should of course keep monitoring the Mac so it stays in top condition. Some steps we can take include:


1. Run Activity Monitor. This application provides complete, detailed, first-hand information about what is happening on the Mac. So when the Mac is running slowly and we don’t know why, we can open Activity Monitor and find out.

a. The quit button. Once we have clicked on an application process that has gone wild and uncontrolled, click this button to shut it down.

b. The percentage of CPU used by an application. An application that is running normally uses no more than about 10% of the CPU. Anything far beyond that is suspect and should be shut down; a buggy, runaway application will typically eat up to 90% of the CPU. No wonder the Mac gets slow.

c. The part where we can see the other resources besides the CPU, such as memory, disk activity, disk usage and network. Some of them come with a chart showing the current activity.


2. iStat Menus. This application displays the information we need right in the menu bar, so with just a glance we can check the current state of the CPU. It can be purchased at macupdate.com for 25 dollars.

 

3. Daisy Disk. Very useful for cleaning up the hard drive when we need space. Daisy Disk instantly shows which folders are the biggest, so we know which folders hold large files. It can be purchased in the App Store for around Rp 95,000.

 

4. Memory Clean. As we know, most applications we run get loaded into memory. The problem is that when we close an application, much of it stays in memory. Over time the memory fills up and the computer slows down. The only way to clear this memory used to be shutting down or restarting the computer. Now we can use Memory Clean to purge memory with a single click; purge several times to get the best results. Download it for free from the App Store.

Yahoo! Acquires AdMovate to Develop Its Mobile Advertising Service

Yahoo!, the Internet giant headed by Marissa Mayer, has once again acquired a startup working in mobile advertising: AdMovate. This latest in a long string of Yahoo! acquisitions is seen as an effort to improve the company’s advertising business, which has looked lackluster lately.

Yahoo!, via its blog on Tumblr, has officially announced the purchase of AdMovate, which operates in mobile advertising services. AdMovate has confirmed the deal, stating that its aim is to help advertisers reach consumers at the right time and place through the personalized messaging AdMovate provides.

As reported by The Next Web today (18/7), Yahoo! says the personalization technology AdMovate has built can improve Yahoo!’s ability to advertise on mobile platforms. In addition, following the acquisition all AdMovate employees will move to Yahoo!’s offices in Silicon Valley, USA.

Marissa Mayer, as CEO of Yahoo!, has spoken of refocusing Yahoo! on mobile services, where it risks being left behind. In her view, the future of Yahoo!’s business model lies in the mobile segment, which continues to grow significantly: “Yahoo’s future is on the phone. So we put out products for mobile phones,” she said.

In a blog post on Tumblr, Scott Burke, SVP of Display Advertising and Advertising Technology at Yahoo!, said Yahoo! is now focusing its investments on the mobile segment: “Yahoo is currently investing more in programmatic buying and in advertising on mobile phones,” he said.

Scott’s description is no exaggeration. In a span of just four months, Yahoo! is reported to have acquired 10 startups: Summly, Astrid, Milewise, Loki Studios, Go Poll Go, PlayerScale, Rondee, Ghostbird Software, Tumblr, and most recently Qwiki, almost all of them working in the mobile field. With the acquisition of AdMovate, a provider of mobile advertising services, Yahoo!’s “wholesale” shopping spree could be its way of breaking into a mobile industry that is growing rapidly these days.

5 Coding Hacks to Reduce GC Overhead

In this post we’ll look at five efficient coding techniques we can use to help our garbage collector spend less CPU time allocating and freeing memory, and to reduce GC overhead. Long GCs can often lead to our code being stopped while memory is reclaimed (AKA “stop the world”).

Some background

The GC is built to handle large amounts of allocations of short-lived objects (think of something like rendering a web page, where most of the objects allocated become obsolete once the page is served).

The GC does this using what’s called a “young generation” – a heap segment where new objects are allocated. Each object has an “age” (kept in the object’s header bits) that defines how many collections it has “survived” without being reclaimed. Once a certain age is reached, the object is copied into another section of the heap called a “survivor” or “old” generation.

The process, while efficient, still comes at a cost. Being able to reduce the number of temporary allocations can really help us increase throughput, especially in high-scale applications.

Below are five everyday ways we can write code that is more memory efficient, without having to spend a lot of time on it or reducing code readability.

1. Avoid implicit Strings

Strings are an integral part of almost every data structure we manage. Being much heavier than other primitive values, they have a much stronger impact on memory usage.

One of the most important things to note is that Strings are immutable. They cannot be modified after allocation. Operators such as “+” for concatenation actually allocate a new String containing the contents of the strings being joined. What’s worse is that an implicit StringBuilder object is allocated to actually do the work of combining them.

For example –

a = a + b; // a and b are Strings

Behind the scenes the compiler generates comparable code:

StringBuilder temp = new StringBuilder(a);
temp.append(b);
a = temp.toString(); // a new String is allocated here.
                     // The previous "a" is now garbage.
But it gets worse.

Let’s look at this example –

String result = foo() + arg;
result += boo();
System.out.println("result = " + result);
In this example we have 3 StringBuilders allocated in the background – one for each plus operation – and two additional Strings: one to hold the result of the second assignment and another to hold the string passed into the print method. That’s 5 additional objects in what would otherwise appear to be a pretty trivial statement.

Think about what happens in real-world scenarios such as generating a web page, working with XML or reading text from a file. Within nested loop structures, you could be looking at hundreds or thousands of objects that are implicitly allocated. While the VM has mechanisms to deal with this, it comes at a cost – one paid by your users.

The solution: One way of reducing this is being proactive with StringBuilder allocations. The example below achieves the same result as the code above while allocating only one StringBuilder and one String to hold the final result, instead of the original five objects.

StringBuilder value = new StringBuilder("result = ");
value.append(foo()).append(arg).append(boo());
System.out.println(value);
By being mindful of the way Strings and StringBuilders are implicitly allocated, you can materially reduce the amount of short-term allocations in high-scale code locations.

2. Plan list capacities

Dynamic collections such as ArrayLists are among the most basic structures for holding variable-length data. ArrayLists and other collections such as HashMaps and TreeMaps are implemented using underlying Object[] arrays. Like Strings (themselves wrappers over char[] arrays), arrays are fixed in size. The obvious question then becomes – how can we add/put items in collections if the underlying array’s size is fixed? The answer is obvious as well – by allocating more arrays.

Let’s look at this example –

List<Item> items = new ArrayList<Item>();

for (int i = 0; i < len; i++)
{
    Item item = readNextItem();
    items.add(item);
}
The value of len determines the ultimate length of items once the loop finishes. This value, however, is unknown to the constructor of the ArrayList, which allocates a new Object array with a default size. Whenever the internal capacity of the array is exceeded, it’s replaced with a new array of sufficient length, making the previous array garbage.

If you’re executing the loop thousands of times, you may be forcing a new array to be allocated and a previous one to be collected multiple times. For code running in a high-scale environment, these allocations and deallocations are all deducted from your machine’s CPU cycles.
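One way to avoid that churn, shown here as a minimal sketch that reuses Item and readNextItem() from the example above, is to pass the expected size to the ArrayList constructor when len is known up front, so the backing array is allocated once and never needs to grow:

// Sketch only: Item and readNextItem() are assumed from the loop above.
// Sizing the list up front allocates the backing Object[] once,
// so the loop below never triggers a resize-and-copy.
List<Item> items = new ArrayList<Item>(len);

for (int i = 0; i < len; i++)
{
    items.add(readNextItem());
}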

Attend Meeting C++ 2013

Boost Dependency Analyzer

I have something special to announce today: a tool I’ve built over the last 2 weeks, which lets you analyze the dependencies in boost. With boost 1.53 this spring I had the idea to build this, but not the time, as I was busy writing a series on the papers for Bristol. Back then I realized how easy it could be to build such a tool, as the dependencies can be read and listed by boost’s bcp tool. I already had a prototype for the graph part from 2010. But let’s have a look at the tool:

The tool is very easy to handle. It is based on the output of bcp, a tool that comes with boost. Actually bcp can help you rip libraries out of boost, so that you don’t have to add all of boost to your repository when you only want to use smart pointers. But bcp also has a listing mode, where it only shows the dependencies; that’s what my tool builds upon. Let’s have a short look at the results, the dependencies of boost 1.54:

A few words on how to read this graph. The libraries in the middle of the “starshape” are the ones with the most dependencies; each line between the nodes is a dependency. A dependency can be one or multiple files. The graph layout is not weighted.

How to

A short introduction to what you need to get this tool to run. First, boost, as this tool is built to analyze boost. I’ve tested with several versions (1.49 – 1.54) of boost. You also need a version of bcp, which is quite easy to build (b2 tools/bcp). Then you simply start the tool; if BOOST_ROOT is set, the tool will try to read it, otherwise you will be asked to choose the location of boost when clicking on Read dependencies. The next thing is selecting the location of bcp. That is the setup, and the tool will now run for some time. On my machine the analysis takes 90 seconds to 2 minutes; it might take a lot longer on yours, depending on how many cores you have. The tool spawns a bcp process for each boost library (~112) and analyzes the output in a thread pool. After this is done, the data is loaded into the tool and then saved to a SQLite database, which will be used if you start the tool a second time and select this version of boost. Loading from the database is far faster.

A screenshot to illustrate this:

[Screenshot: tl_files/blog/bda/bda.png]

To the left are all the boost libraries, with the number of dependencies shown in braces. To the right is a tab widget showing all the dependencies; the graph is laid out with boost graph. When you click on show all you get the full view of all dependencies in boost. The layout is computed in the background, so it takes some time to calculate, and is animated when it’s done. The layout results are good, but not perfect, so you might have to move some nodes. Export supports images, which are transparent PNGs; not all services/tools are happy with that (e.g. neither facebook, twitter nor G+ could handle the perfectly fine images), but this can be fixed by post-processing the images and adding a white background.

Inner workings

I’ve already written a little about the tool’s internals. It’s built with Qt 5.1 and boost, where boost is mostly used for the graph layout. As I chose to work with Qt5, it has a few more dependencies; for Windows this sums up to an 18 MB download, which you’ll find at the end. The tool depends on 3 libraries from my company Code Node: ProcessingSink, a small wrapper around QProcess that lets you start a bunch of processes and connect to the finished and error slots. This was necessary, as I could only spawn 62 parallel processes under Windows, so this library now takes care of spawning the parallel processes, currently 50 at a time. GraphLayout is the code that wraps the inner workings of boost::graph; it’s a bit dirty, but lets me easily run the graph layout. The 3rd library is NodeGraph, which is the graph UI, based on Qt’s Graphics View Framework.
I plan to release the tool and its libraries under the GPL later on github; for now I don’t have the time to polish everything.

Problems

One of the earliest questions I had when thinking about building such a tool was where to get a list of the boost libraries. This sounds easy, but I need the list to be machine-readable, not human-readable, so while HTML is a fine format, I refused to write a parser for that list just yet. I talked to some people about this at C++Now, and most agreed that the second option would be best: maintainers.txt. That’s what the tool currently reads to find the boost libraries. Unfortunately at least lexical_cast is missing from this list, so the tool isn’t perfect yet; while lexical_cast is already patched, I’m not sure whether anything else is missing. A candidate could be signals, as it’s no longer maintained. Currently the tool analyzes 112 libraries for 1.54.

boost dependencies

Working on this tool for 2 weeks has given me some inside knowledge about the dependencies in boost. First, the way it is shown in the tool is the view of bcp. Some dependencies will not affect the user, as they are internal; for example, a lot of libraries have a dependency on boost::test simply because they ship their tests with it. The bcp tool really gets you ALL the dependencies. Also most (or was it all?) libraries depend on boost::config. I plan to add filtering later, so that the user can filter out some of the libraries in the GraphView.

The tool

Here is how to get the tool for now: there is a download of the binaries for windows and linux. I’ll try to get you a deb package as soon as I have time, but for now it’s only the binaries for linux; you’ll have to make sure you have Qt5.1 etc. on linux too, as I do not provide them. For Windows, there are 2 archives you’ll need to download: the program itself, and the needed dlls for Qt5.1 if you don’t have the SDK installed (in that case you could also copy them from the bin directory).

Note on linux: this is a one day old beta version. Will update this later.