As a parent I understand that gadgets like the Milk Nanny will be forgotten – like the Diaper Genie and the no-pacifier rule – five months into the severe sleep-deprivation experiment that is parenthood. As a gadget freak, however, I do think it’s pretty cool.
It’s a simple device. The system holds a supply of formula and water. When you need formula you simply press a button and it squirts out, pre-warmed and ready to drink. You can control the system from your phone and even tell the device which formula you are using, allowing you to see nutritional information and quality on the fly. You can then see exactly when the wee one ate and how much, so parents who want to track their infants’ food intake as obsessively as a Honda Civic owner tracks gas mileage can do so with ease.
The device starts at $149 and will start shipping in May. While it is trivial to mix formula, doing it in a pre-dawn haze while a small human screams in your arms is not trivial. Therefore something like Milk Nanny is an interesting addition to the pantheon of baby gear so familiar to those who have friends who are headed to Doula XPress, Baby’s Organic Items R’ Us, and The Conflicting Information Bookstore For New Parents. I fully expect things like the Milk Nanny to hit registry lists over the next few years, right next to the wipes warmers (which are actually nice in the winter) and those little rubber giraffes.
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/0SNrS1PShcA/
MERCEDES AMG PETRONAS Gets Back to Work with the new F1 W06 Hybrid Silver Arrow
The MERCEDES AMG PETRONAS Formula One Team today unveiled its 2015 Formula One World Championship challenger, the F1 W06 Hybrid, ahead of the first day of pre-season testing at the Circuito de Jerez in Spain. Drivers Lewis Hamilton and Nico Rosberg revealed the seventh Silver Arrow to compete in the Formula One World Championship to the assembled media, with Nico then taking the wheel for the opening test session of the 2015 season at the Spanish circuit.
From Revolution to Evolution
2015 marks the beginning of an evolutionary process for the cutting-edge Hybrid racing formula introduced for 2014. But the word evolution shouldn’t mask the scale of the change this represents: the regulatory requirements and opportunities, coupled with ambitious internal performance targets, mean that this winter has been as challenging as ever. The more stable the rules, the more innovative a team must become to deliver performance from each generation of racing machine.
“From outside, winter might look like a rest time in Formula One. But this couldn’t be further from the truth,” explains Toto Wolff, Head of Mercedes-Benz Motorsport. “Long before the end of last season, development work on our 2015 car became the main priority. The winter months are the most intense, with everybody at Brackley and Brixworth working around the clock to prepare for the season ahead. Their dedication is inspiring. It would be so dangerous to rest on our laurels after 2014 – but none of us have any sense that things will be easier now. On the contrary, our motivation is as high as ever. There’s a famous Babe Ruth quote I always say to people: ‘Yesterday’s home runs don’t win today’s games.’ We are fully aware that success is not only hard won but can also be short-lived without the right approach.
“A lot of things have been put in place – not by one individual but by many individuals working as a team. And this has been about taking the right decisions at the right time, trying to avoid mistakes and, if they do happen, quickly analysing why. Even if, on paper, things are looking good, we will not fall into the trap of being overly optimistic or trying to make crystal ball predictions because this is not how the sport works. In fact, this is possibly the most brutally honest sport because the stopwatch never lies.
“The launch of a new car is an exciting moment for everybody involved. To see the product of all that hard work turn a wheel for the first time is a great source of pride – and rightly so. However, there is a long road ahead before we reach the first race. We must build a fast, reliable racing car to be able to compete with rivals whose ability and competitiveness can never be underestimated. At this point, every team is on zero points. The only thing we know for sure is that we have consistency in our driver line-up for the third year running and they know what’s expected of them, just as we know what they expect from us. We’re looking forward to seeing them push each other and the team forward. It will be another fascinating battle between the two, but we must wait until Melbourne until we know if we have provided them with a car that allows them to fight each other for wins.”
As the age of Hybrid power evolves, so the requirements for improved safety, reliability, efficiency and performance increase. An intensive design phase combining incremental gains and targeted innovation has produced the F1 W06 Hybrid, a car which delivers mechanical, structural, aerodynamic and weight saving developments over its predecessor, the F1 W05 Hybrid.
“The key factor from our perspective is avoiding complacency,” says Executive Director (Technical), Paddy Lowe. “Expectations are now high and a lot of assumptions are being made about our potential this season. Internally, however, we are fully aware that you can never afford to stand still in any sport – particularly Formula One. We are up against competitors with a great history of success and, like us, they will not be content unless they are winning. As the old motor racing adage goes, you are only as good as your last race. This time last year, with the new Hybrid technology yet to hit the track, we went to Silverstone for a filming day and were genuinely surprised to see the car drive out of the garage! Second time around we may be over the initial hurdles of the new formula, but we keep those hurdles fresh in our minds, as they demonstrate that nothing can be taken for granted. The only thing that is in our control is the ability to do the very best we can in every area.
“Of course, like every other team on the grid, we have been pushing harder than ever to find areas for performance gains. But at the same time, we must ensure we are moving in the correct direction. One of the risks with car development is that attempting forward steps can easily turn into rearward steps. You have to take risks to progress – but those risks must be carefully managed in order to produce a car that is better than its predecessor. This has been an underlying theme for the team over the winter.
“It is an evolutionary process and this also includes the regulations themselves. Relative to last winter, these have remained reasonably stable into 2015. But this is certainly not to say that the cars we see take to the track in Jerez will be near-replicas of their predecessors. Some changes will be more visually obvious, of course, but the devil is in the detail. Beneath the covers there have been a raft of developments from both a chassis and Power Unit perspective – all aimed at creating a car that is safer, more efficient, more reliable and ultimately faster. With the Hybrid era still very much in its infancy, there is plenty of scope for innovation. The challenge at this stage is to find the key areas for performance gain based not just on what we have learned a year further down the line, but also on where there is room for exploring new and innovative sources of competitive advantage.”
Editor’s note: Nick Rojas is a business consultant and writer who lives in Los Angeles and Chicago. He has consulted small and medium-sized enterprises for over twenty years. John Siegel is a writer and hockey enthusiast.
Hockey is a naturally aggressive sport, and the casual fan has learned to associate it with violence — the kind that makes the daily sports highlight segment on the news.
But like other sports, there are subtle nuances to the game that are lost in between the bare-knuckle brawls and bone-jarring hits. Hockey is the ultimate team sport in that players are more than willing to sacrifice notoriety — and their physical well-being — for the betterment of the organization.
Whether it’s Mark Recchi’s kidney stone surgery (in addition to broken ribs) on the morning of game 7 of the Stanley Cup Final, or Sebastien Courcelles having his face slashed open with a skate, there are countless instances — both obvious and hidden — that show how the players exhibit a toughness that’s rarely matched in an athletic setting.
The business of sports, however, can’t quantify the toughness of its players. This is where hockey finds common ground among most other professional sports. While hockey players tend to their wounds and clean up the blood, the front office will likely have another team working within a discipline that can also be described as unforgiving, beautiful and even brutal, but for very different reasons. At least that’s how scientists sometimes describe math.
What’s at work is an applied science, and as usual, science rarely gets it right the first time. In this case, it’s also about getting a lot of help from people in big data and analytics from overlapping fields. What’s really new about hockey and analytics can be found at the very beginning and end of the process, namely what data should be collected and in what form the results should be expressed.
As with baseball and Earnshaw Cook, players (that is, players in the game of hockey analytics) are emerging with ideas about effectively converting raw data into decisions for a winning team.
So what are teams doing to actually produce results? Although the objective is simple, strategies for getting there can vary from team to team. Kyle Dubas, the young assistant GM of the Maple Leafs, created a hockey “research and development” team. It includes a chemical engineer and a mathematician. Their only task is to apply science and engineering to hockey statistics, resulting in more wins for the team.
In addition to in-house talent, teams will be working with academic institutions and companies that specialize in big data and analytics. One such company is SAS Analytics, which does about $3 billion in business annually. They use big data to help banks, telecommunications companies and even governments. Although most of their work has been done outside of sports, the datasets are often treated in much the same way as those from other projects.
The convergence of math, computer science, information technology and (as always) the Internet of Things (IoT) will also drive the end result. Whether it’s a large corporation or a fan, both professional and amateur analysts will likely be building on existing ideas to varying degrees. HARO, HARD and HART ratings are examples of simple calculations currently in use, where:
HARO = Hockey Analysis Rating Offense
HARD = Hockey Analysis Rating Defense
HART = Hockey Analysis Rating Total
The foundation of good results is good data. People like Corsi and Fenwick (for example) have made a name for themselves by being among the first to develop some usable shot-based analytics. In simplest terms, the analysis amounts to a tally of shots along with the outcome of each shot. Compared to some of the newer ideas, it’s not especially sophisticated, but it’s a good start.
When applied, it looks something like this:
A player (player “A”) is on the ice for 10 shots during a game. During that time interval, the opposing team takes three shots:
Corsi For (CF) = 10
Corsi Against (CA) = 3
Player “A” would be a +7 Corsi (10 – 3 = 7) for the game
It’s important to note that, typically, shots in the game of hockey are only counted in the box score if they make their way to the goalie, who either saves the shot or lets in a goal. Corsi and Fenwick take into account all shots directed toward the net, including shots that miss their mark or are blocked by an opponent.
Let’s say that during the time player “A” was on the ice, 10 shots were taken, two of which were blocked by players on the opposing team. The opposing team had three shots during that same time interval, but one was blocked. Fenwick excludes blocked shots from the formula, so the numbers for player “A” would look like this:
Fenwick For (FF) = 8 (10-2)
Fenwick Against (FA) = 2 (3-1)
Player “A” would be a +6 Fenwick (8 – 2 = 6) for the game.
To make the data easier to use, statisticians express the results as a percentage. In this case:
CF% (Corsi For Percentage) = CF/(CF+CA) = 76.9%
FF% (Fenwick For Percentage) = FF/(FF+FA) = 80%
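For readers who like to tinker, the tallies above can be reproduced in a few lines of Python. The statistic names are standard; the function itself is just an illustration, not code from any team’s system:

```python
def corsi_fenwick(shots_for, shots_against, blocked_for, blocked_against):
    """Compute basic Corsi and Fenwick numbers for a player's time on ice.

    Corsi counts every shot attempt (on goal, missed, or blocked);
    Fenwick is the same tally with blocked attempts excluded.
    """
    cf, ca = shots_for, shots_against
    ff, fa = shots_for - blocked_for, shots_against - blocked_against
    return {
        "corsi": cf - ca,
        "fenwick": ff - fa,
        "CF%": round(100 * cf / (cf + ca), 1),
        "FF%": round(100 * ff / (ff + fa), 1),
    }

# Player "A" from the example: 10 attempts for (2 blocked), 3 against (1 blocked)
stats = corsi_fenwick(10, 3, 2, 1)
print(stats)  # {'corsi': 7, 'fenwick': 6, 'CF%': 76.9, 'FF%': 80.0}
```

Plugging in player “A”’s game gives the same +7 Corsi, +6 Fenwick, 76.9% CF and 80% FF worked out above.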
It’s important to appreciate the volume of information. Player data can amount to millions of data points. The quest for accurate data and powerful analytics in sports will change shots into goals, and goals into cash. At least, that’s what the teams are banking on. In addition, predictive analytics may also be used to anticipate injuries.
There’s evidence to support a correlation between winning teams and the numbers they produce during the season. For example, the Los Angeles Kings, the 2012 and 2014 Stanley Cup winners, posted some of the league’s strongest CF% and FF% figures in those seasons.
A significant roadblock to the widespread implementation of “fancy stats” around the NHL lies in the way that the game has been taught — and subsequently, played — as its popularity grew over the course of the 20th century.
Frankly, the machismo of the game stands firmly entrenched in front of the evolution of the methods by which it is played. Enforcers, long used to protect smaller, more talented players, are slowly but surely being phased out — as is the act of fighting — so that more talented players can take their roster spot. In fact, fighting has been on the decline since the late 1980s.
Active debates among peers can already be found on the Internet. What’s interesting is the role that fans may play in the peer review process (whether team owners want to acknowledge them or not). Fan analytics is quite competitive, and deserving of some credit for evolving what had been a stagnant approach to the progression of the game.
There is a serious effort among fans with the aptitude and will to understand the math. At times, the word “effort” is a bit rosy when one considers some of the lively exchanges regarding the benefits of traditional analytics, so-called “advanced analytics,” and no analytics at all. Even the classification of methods is fair game.
With all the attention given to this new avenue for getting ahead, it will be interesting to see how it’s incorporated into various aspects of the sport, from the top-secret “teams within a team” to the fan experience, both at home and near the ice.
Featured Image: Bryce Durbin
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/1zDvdq-l2Xs/
To some, they’re the holy grail of computing: computer chips that work like the brain — opening up a wealth of possibilities, from artificial intelligence to the ability to simulate whole artificial personalities.
To others, they are another entry in a long list of interesting but impractical ideas, not too dissimilar to the ubiquitous flying car.
They’re called neuromorphic chips and the truth about them lies somewhere in the middle.
The main force driving interest in neural network technology is an inconvenient truth: the architecture that drove the information age, created by the famed mathematician and physicist John von Neumann (and known as von Neumann architecture), is running into the fundamental limits of physics.
You may have seen these effects already: Notice how your new shiny smartphone isn’t light years ahead of your last upgrade? That’s because chips that use von Neumann architecture face a performance bottleneck and designers must now balance the functionality of the chips they design against how much power they’ll consume. Otherwise you’ll have an insanely powerful phone, with a battery that lasts all of 12 minutes.
In contrast, nature’s own computing device, the brain, is extremely powerful — and the one you’re using to read this consumes only about 20 watts of power. Access to that sort of efficiency would enable a new era of embedded intelligence. So what’s the hold-up?
Neural networks are not a new idea. Research into the technology traces its roots back to the 1940s, alongside regular computer chip technology.
Today, software emulations of neural networks power many of the image recognition capabilities of internet giants like Google and Facebook, who are investing heavily in startups. However, because these emulations must run on expensive supercomputers, their potential is limited: they run on a system architecture that is serial by nature rather than parallel — and parallelism is one of the keys that makes neural networks so appealing in the first place.
So, how long will it be before we get these neural network chips?
Actually a class of neuromorphic chip has been available since 1993. That year, a small independent team approached IBM with an idea to develop a silicon neural network chip called the ZISC (Zero Instruction Set Computer), which became the world’s first commercially available neuromorphic chip.
The independent team had prior experience building software neural networks for pattern recognition at CERN’s Super Proton Synchrotron, a particle smasher and the older sibling of the more famous Large Hadron Collider of Higgs boson fame.
Frustrated by the inherent limitations of running neural networks on von Neumann systems, they had come to the conclusion that creating neuromorphic hardware was the best way to leverage the unique capabilities of neural networks and, with IBM’s expertise, the ZISC36 with 36 artificial neurons was born. IBM, alongside General Vision Inc., sold the chip and its successor, the ZISC78 (78 neurons), for eight years, from 1993 until 2001, when IBM exited commercial ZISC chip manufacturing and development.
General Vision decided to carry on, as they believed the technology had many unexplored applications. They leveraged their expertise to continue developing the neuromorphic technology until, after five years of project work, they managed to raise enough capital to build a successor to the ZISC. In 2007 they launched the Cognitive Memory 1000, aka the CM1K: a neuromorphic chip with 1,024 artificial neurons working in parallel while consuming only 0.5 watts, able to recognize and respond to patterns in data (images, code, text, anything) in just a few microseconds. The gamble on neuromorphic chips appears to be paying off, because soon after, neuromorphic research stepped up several gears.
In 2008, just one year after the CM1K was developed, DARPA announced the SyNAPSE program – Systems of Neuromorphic Adaptive Plastic Scalable Electronics – and awarded contracts to IBM and HRL Labs to develop neuromorphic chips from the ground up.
Another year later, in 2009, a team in Europe reported a chip with 200,000 neurons and 50 million synaptic connections, the result of the FACETS (Fast Analog Computing with Emergent Transient States) program, which in turn led to a new European initiative, the Human Brain Project, launched in 2014.
Numerous universities also began (or renewed old) programs to look into neuromorphic chips, and interest has begun to gain momentum.
In 2012 Intel announced it was getting involved in neuromorphic chips with a new architecture, while Qualcomm threw its hat into the ring in 2013, backed by freshly acquired neural network startups. Most recently, in August 2014, IBM announced TrueNorth, a neuromorphic chip with 1,000,000 neurons and 256 million programmable synapses – one of the world’s most powerful and complicated chips, complete with its own custom programming language, all of which came out of DARPA’s SyNAPSE program.
TrueNorth is deeply impressive technology, a concept that could herald some of the more lofty dreams of neural networks, but sadly it is unlikely to be found in your next tablet for cost reasons. It does certainly prove a point though – we’re entering an age where neural network chips can actually work and are no longer purely in the realms of science fiction.
Why has it taken so many years for these neuromorphic technologies to take off?
In part, the success of Moore’s Law meant engineers simply didn’t need neural network architectures. It may surprise you to hear this, but many of the issues surrounding the slow adoption of neuromorphic chips have never been technical; they stem from a lack of belief. The renewed global interest in neuromorphic chips has opened people’s minds to the idea of ‘brain chips’, which is why an 8-year-old chip design can still get people excited: a technology that can be taught (yes, that’s right: taught, not programmed) to recognize just about anything, from a face to a line of code, then spot whatever it has been taught anywhere in enormous volumes of data in just a few microseconds, and that can be integrated with almost any modern electronics very easily.
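The “teach, then recognize” workflow can be sketched in ordinary software. This is only a toy, serial illustration of the idea: the real chips do this kind of matching in parallel silicon, and the simple distance-and-radius matching used here is an assumption for illustration, not the documented algorithm of any particular chip.

```python
import math

class ToyPatternMemory:
    """A toy 'teach, then recognize' pattern memory.

    Each taught example plays the role of one 'neuron'. Recognition
    finds the nearest taught example; if nothing is close enough,
    the pattern is reported as unknown.
    """

    def __init__(self, radius=2.0):
        self.examples = []      # (pattern vector, label) pairs
        self.radius = radius    # how close a match must be to count

    def teach(self, pattern, label):
        self.examples.append((tuple(pattern), label))

    def recognize(self, pattern):
        best_label, best_dist = None, float("inf")
        for example, label in self.examples:
            dist = math.dist(example, pattern)  # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label if best_dist <= self.radius else "unknown"

memory = ToyPatternMemory()
memory.teach((0.0, 0.0, 1.0), "face")
memory.teach((5.0, 5.0, 5.0), "code")
print(memory.recognize((0.1, 0.2, 1.1)))  # face
print(memory.recognize((9.0, 0.0, 9.0)))  # unknown
```

A hardware implementation evaluates every stored example simultaneously, which is why recognition time stays flat as the memory fills up, instead of growing with each taught pattern as it does in this serial loop.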
Neuromorphic technologies have the potential to transform everything. One day maybe we will be able to download ourselves onto a neuromorphic brain chip and ‘live forever’ but long before then more practical applications beckon.
From EEG/ECG monitors that automatically recognize the warning signs of an irregular heartbeat, to a phone that knows the faces of the friends in a picture it’s taken and automatically sends them all a copy. Pet doors that admit only one individual pet, a car that recognizes its driver and automatically adjusts to their settings, robots or drones that can follow anything they’re trained to recognize, a cookie jar that locks when it recognizes unauthorized hands… the list of applications is virtually endless.
We can enable the things we use to ‘know’ us, creating neuromorphic things and beginning an era of true ‘smart’ technologies. Let’s be honest: in reality your smartphone isn’t ‘smart’; it’s just a well-connected tool that you use. For something to be smart, as in ‘intelligent’, it must first have some way of recognizing the data and inputs that surround it, and pattern recognition is something neuromorphic chips are very good at. You could even call it cognition.
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/cIpb0Qz9MZc/
Editor’s note: Jahangir Mohammed is founder and CEO of Jasper, the cloud-based software platform that powers the Internet of Things.
Not long ago, after you bought a new vehicle, the manufacturer had very little contact with you for years until it was time to sell you another car. The Internet of Things is changing all that. The IoT-enabled “connected car” turns the vehicle itself into a hub for an entire ecosystem of connected services that offer consumers a wealth of benefits including enhanced safety and security, a richer user experience and a new suite of product offerings. From the manufacturer’s perspective, this also helps establish an ongoing customer relationship as well as incremental revenue streams over the life of the vehicle.
Across the world, 23 million cars on the road today are connected to the Internet, and that figure is expected to increase to 152 million. Historically, GM vehicles have been a hub for additional services, like its OnStar safety and security services. And in the past year, new GM vehicles in the U.S. have been equipped with 4G LTE mobile hotspots, with even more services promised in the future.
More vehicle manufacturers are jumping on board, as well, adding value-added connected IoT services to more new vehicles. Let’s take a look at some of these new connected car services and how IoT is providing the foundation on which this ecosystem is built.
Gone are the days when being in the car meant you had to be disconnected. Many of today’s vehicle manufacturers are introducing in-vehicle mobile hotspot capabilities, giving both driver and passengers the ability to stay connected while on the road. In addition to GM, Audi equips its vehicles with Gemalto enabled mobile hotspot services through its Audi Connect service and Ford’s new Sync 3, powered by BlackBerry’s QNX, is included in more than 30 million vehicles spanning 250 models.
Because vehicle manufacturers are leveraging IoT platforms that enable connectivity via a network of mobile operators, passengers can connect their laptops, smartphones and tablets to the car to surf the Internet, watch movies and access social media. And drivers grow more loyal to the manufacturers that enable them to enjoy the benefits of this new in-car experience.
Over-the-air (OTA) software updates in cars are very similar to the software updates that occur in smartphones. Any software update for a vehicle’s connected services is done wirelessly OTA, keeping the OEM in contact with the vehicle but removing the need for a dealership visit.
This matters because in the past year alone, over 60 million cars were recalled in the U.S., many attributed to software glitches. The cost – and burden – that recalls place on the driver and the vehicle manufacturer can be severe.
Many manufacturers are making strides to increase OTA offerings. Thanks to Gemalto, Audi provides features such as parking and gas prices using OTA updates on its Audi Connect solution, while GM has been offering OTA updates to its Bluetooth platform using the OnStar embedded connectivity platform. OTA updates are slowly becoming an industry standard – Chrysler Uconnect, Mercedes-Benz mbrace, BMW ConnectedDrive and the Toyota Entune systems regularly send firmware OTA updates to fix software glitches in their vehicles.
Sure, cars have been enabling us to connect our devices via Bluetooth to make calls and navigate — but what about the ability to use your favorite streaming apps to get the content you love without fiddling with your phone while driving, or eating into a valuable data plan? By leveraging the always-on connectivity that IoT provides, today’s connected car manufacturers are providing an entirely new interface for delivering and interacting with streaming content services like Spotify or Pandora.
IoT platforms also provide an easy way for content providers to monetize their services and optimize them for the connected car environment. Vehicle manufacturers, premium content providers, and even brick-and-mortar retailers can use IoT to give car owners a free trial of their services or other offers. OnStar’s new AtYourService feature brings commerce directly to the car, connecting retailers to drivers while they’re on the road. Online coupon services like RetailMeNot and Entertainment Book can deliver shopping coupons to the car’s dashboard for retailers located in the driver’s direction.
And IoT isn’t just about getting the service to the car – it also enables providers to track usage (or non-usage) patterns, prompt drivers to try new features, top up a diminishing data allotment or upgrade the service when the time is right. IoT has become the strategic platform to help deliver and monetize these services.
Safety and security
Safety and security remain the most important aspects of any vehicle. And with today’s connected cars, IoT enables the vehicle to continue to fulfill that role even after an accident has occurred. For example, IoT empowers pre-installed services like Volkswagen’s Car-Net with 24/7 automatic crash notification, which can automatically alert emergency services when an accident has occurred.
These possibly life-saving features are wholly dependent on reliable, low-latency connectivity, because a delay in notification (and the resulting delay in medical response) can literally mean the difference between life and death.
Another emerging value-added service enabled by IoT is usage-based insurance (UBI), which tracks a driver’s behavior to establish a personalized insurance rate plan for each driver. Whereas insurance providers were previously limited to setting rates based on general factors like age, average mileage and accident history, IoT provides a myriad of opportunities for customization.
For example, Allstate’s Drivewise program uses a small device deployed in your vehicle to collect information about your driving style. Data points collected include speed, brake activity and the time of day you drive. By collecting this data, Allstate can customize rates for each driver, incentivizing safe driving behavior and reducing premiums. Allstate has also partnered with Airbiquity, a provider of connected vehicle services, to further develop the insurer’s Roadside Assistance program offerings that leverage driving data.
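To make the idea concrete, here is a hypothetical sketch of how trip telemetry might be turned into a premium discount. The field names, weights and thresholds below are invented for illustration; real programs like Drivewise use proprietary, far richer models.

```python
# Hypothetical usage-based insurance scoring. All weights and
# thresholds are made up for illustration only.

def trip_risk_points(trip):
    """Score a single trip: more points = riskier driving."""
    points = 0
    points += trip["hard_brakes"] * 2              # sudden braking events
    points += max(0, trip["top_speed_mph"] - 80)   # mph above an 80 mph cap
    if 0 <= trip["start_hour"] < 5:                # late-night driving window
        points += 3
    return points

def premium_discount(trips, max_discount=0.25):
    """Map average trip risk to a discount between 0% and max_discount."""
    avg = sum(trip_risk_points(t) for t in trips) / len(trips)
    return max(0.0, max_discount - 0.01 * avg)

trips = [
    {"hard_brakes": 0, "top_speed_mph": 72, "start_hour": 9},   # clean daytime trip
    {"hard_brakes": 2, "top_speed_mph": 85, "start_hour": 1},   # risky late-night trip
]
print(f"{premium_discount(trips):.0%}")  # 19%
```

The interesting business point is less the arithmetic than the feedback loop: because the insurer sees the per-trip data, the discount can move with behavior, incentivizing safer driving rather than merely pricing a demographic bucket.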
A number of startups are building off this idea of collecting driver data. Commercial plug-in modules like those from start-up Mojio connect cars to the cloud via embedded radio links and upload hundreds of data points about the car every minute, tracking everything from the brakes to the locks. It then uses these data points to provide connected car services in partnership with apps like Concur to measure gas mileage and Glympse for GPS tracking.
A new business model
IoT is turning the vehicle into a powerful hub, enabling developers, manufacturers and service providers to offer value-added services and effectively changing their relationship with customers.
Connected services strengthen the vehicle manufacturers’ relationships with their customers and enhance brand loyalty – all while developing new and ongoing revenue streams throughout the life of the vehicle. Manufacturers can monitor customers’ experiences in real-time and on the road, enhance product quality, and investigate potential post-sale opportunities. And in turn, service providers are encouraged to explore an open platform for the production of apps that bring innovation to drivers and their passengers.
Armed with the power of IoT, vehicle manufacturers are on the leading edge of evolving user experiences and new business models based on IoT services. It’s no longer about a one-time product sale – it’s about creating an ecosystem wherein they can deliver and monetize services on an ongoing basis.
The connected car industry’s successes have been a guiding light for many other markets as we see businesses everywhere embrace the evolution from a product company to an IoT services company.
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/0IaKVi1d8CU/
When not opining here on TechCrunch I’m a software engineer for the fine folks at HappyFunCorp (1) and I’m occasionally called on to diagnose and fix projects that have gone horribly wrong (2). The more I do this, the more I notice commonalities among problem projects: “antipatterns,” if you will. Here I give you three more from my ongoing list of such. Names have been changed to protect the guilty.
1. Onboarding Time == Technical Debt
Technical debt is not always a bad thing, but if you accrue too much of it, it will kill you. When under schedule pressure, or when new devs keep coming onto and going off a project, people tend to build new software subsystems and connect them to the old ones Rube-Goldberg style, instead of doing it right. It’s like turning a family home into a cantilevered skyscraper one room at a time, and waiting with dread for an earthquake to hit, instead of razing it and pouring a new foundation as you should have.
But sometimes taking on short-term technical debt is the right thing to do. The real problem with technical debt is that it often lurks off the metaphorical balance sheet: it’s hard to measure, especially if you’re non-technical. What I’ve noticed of late is that there exists an excellent proxy measure for a project’s technical debt: the onboarding time for a new developer.
So ask yourself: how long does it take a new dev to get set up and start pushing good new code to the repo? In many cases the answer should be an hour or less. No, really. “git pull”, “bundle install”, “rails s” and away you go. Or “git pull”, “pod update”, open the workspace in XCode, hit Command-R, and boom. It’s been some months since I did any Android work, but Android Studio ought to be comparably easy.
But, I hear you sputtering, my environment is very complicated! We have virtual machines and multiple databases and virtual private networks! Systems talking to systems talking to systems! No way we can get a new dev set up in an hour! Uh-huh. Facebook’s pretty large and complicated too, you know … and Facebook engineers famously write real code on their first day and push it to the live site their first week. If your new developers are spending hours wrestling with configuration and environments, you have probably run up more technical debt than you care to think about.
2. The Test Suite Sweet Spot
Obviously you need to write, and run, tests for your code. And in an ideal world, you would have up-to-date tests for all of your code, which run automatically whenever you commit a change to the repo. Unfortunately, in testing as in life, the perfect is often the enemy of the good. It’s amazing how often I’ve encountered projects with elaborate test suites that have been hopelessly broken for months and/or take forever to run.
Developers write tests but don’t run them regularly, so they begin to fall out-of-date, and schedule pressure means that fixing the tests is non-critical whereas getting the new release out is critical, so the vicious circle compounds and the test suite decays into uselessness. Or–more rarely–test-driven development becomes test-centered development, and actually Getting Stuff Done takes a back seat to writing ever more and more elaborate test code, the refactoring of which takes so much time that development progress gets slower and slower.
There are costs to maintaining a huge and complex test suite; after you refactor code, you may have to either refactor your tests, which takes time, or let them break, which (ultimately) takes even more time. If your organization / management / development pressures are such that keeping all your tests up to date isn’t a valid option, and you can’t alleviate those pressures, then it’s actually better to shrink your test suite. I know that doesn’t sound appealing. I don’t like it either. But a smaller test suite you actually run and maintain is much better than a larger one you ignore.
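One cheap way to keep that smaller suite from rotting is to make running it a side effect of something developers already do. Here’s a sketch of a git pre-push hook along those lines; the `make test-fast` target is an assumed stand-in for whatever fast subset of your suite you actually keep green:

```shell
#!/bin/sh
# Sketch of .git/hooks/pre-push: run the fast test suite before every push,
# so the small suite you keep is actually exercised daily.

run_fast_tests() {
  # Assumption: `make test-fast` wraps the project's quick suite;
  # substitute `bundle exec rspec --tag fast` or similar.
  make test-fast
}

main() {
  echo "Running fast tests before push..."
  if ! run_fast_tests; then
    echo "Tests failed: push aborted (bypass once with: git push --no-verify)"
    return 1
  fi
  echo "Tests passed."
}

# In the installed hook, the file would end with: main "$@"
```

Keeping the hook’s suite deliberately small — seconds, not minutes — is what makes developers tolerate it instead of routinely bypassing it.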
3. Please Just Stop Running Your Own Servers Already
Big Company problem projects tend to have something else in common: they’re still running their own servers. No AWS or Google Compute Engine for them, much less Heroku or Elastic Beanstalk or App Engine. They have their own machines. And their own sysadmins. And their own homegrown processes for patching and updating and deploying code. And their own incredibly paranoid security, which often means “no developer may lay hands on the production machines,” which often makes it pretty hard to get anything done.
Big companies need to ask themselves: are our sysadmins better than Google’s or Amazon’s? And since the answer is probably no, the conclusion is: why don’t we let the experts handle it? (To be fair, they often are better than those of some cloud hosts — e.g. I’ve never had a good experience with Rackspace.) Yes, there are downsides. Yes, it means a loss of control. It’s not a panacea.
But unless your circumstances are truly exceptional, a full cost/benefit analysis usually points very firmly towards moving your servers to The Cloud. Back in the early days of electricity, every factory had its own generator, and many protested loudly at the loss of control inherent in this whole newfangled “power grid” notion. But there’s a reason the grid succeeded, and it’s the same reason the cloud is succeeding today. Don’t get stuck in the twentieth century. Stop making the old mistakes. After all, there are so many bold new mistakes to make.
(1) Recently profiled on video here, to my great surprise. (Nobody will believe this, but despite my TC-columnist gig, I had no idea this was happening until the week it happened.)
(2) Usually either internal projects at large clients, or startup apps built by a cheap third-party dev shop. There is some truth to the notion that you get what you pay for.
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/pWiO3u3BR5Q/
Nest, the connected home company acquired by Google last year, made the news yesterday with the departure of two of its key people: founding VP of technology Yoky Matsuoka and Greg Duffy, who had come to Nest as the CEO of acquired company Dropcam. In the wake of that, we’ve been passed (anonymously) two different memos from Nest co-founders Tony Fadell and Matt Rogers, which shed some more light on what is going on.
In short, the company is reorganizing as it gears up to launch a raft of new products in 2015, starting with camera updates in two stages (or levels) that Nest is referring to as “Quartz” and “Black Quartz.” In line with this, and the departures, the company is adding more business and product management, redistributing some of the technology teams closer to product groups, and tying camera hardware more directly to Rogers’ authority.
While the memos don’t really offer any context for why Duffy or Matsuoka have left the company, we’ve seen reports elsewhere that allege culture clashes between Dropcam and Nest.
We have had another tip that points to similar problems — also anonymous — alleging that Duffy had been in conflict with Fadell before leaving. It also points to low morale after employees were told they would have to work Saturdays “until April or May (by emailed edict a few weeks ago) because we are supposed to be pulling hard for a release we don’t even believe in.”
One thing seems clear: Duffy and Matsuoka were popular and their departure surprised others at the company. “Place is in shock and disbelief,” another tipster wrote to us.
Nest declined to comment for this story. We’re embedding the two memos below, but here are the basic details:
Rogers’ note — possibly put out to the company in the wake of the news going public — confirms the departures of Matsuoka and Duffy, with Matsuoka moving to Twitter in an unspecified role and Duffy “leaving to pursue other opportunities.” Duffy later said as much himself via Twitter.
It doesn’t look like either will be replaced: the algorithms team that Matsuoka led will now be reorganised to sit closer with product groups, to “help [Nest] build even better products by getting our technology and algorithm roadmaps more tightly aligned.”
Duffy’s former direct reports Loren Kirkby and Greg Nelson will now report to Daphne Luong, “managing engineering for e-commerce, the video service and next-generation video architecture.” Meanwhile, Rogers will get a more direct grip on camera hardware, with Sam Kortz now reporting to Rogers.
To follow that up, Fadell also noted some changes in a separate memo. Tom vonReichbauer, who started out as the company’s CFO when he moved to Nest from Tesla, is taking on more responsibility as the company’s head of business, overseeing sales and business development in addition to the operational and finance roles.
And Maxime Veron, who worked for Apple for many years before joining Nest in 2011, is moving over to make room for a second product marketing lead, “given our ambitious roadmap for 2015 and beyond.” Richard Townhill will be the company’s new product marketing lead for apps and services.
Updated with response from Nest
Article source: http://feedproxy.google.com/~r/Techcrunch/~3/nGHKW89SVXY/
When you discover ugly, scar-like marks on your thighs, your breasts, or your stomach, you will want to know how to fade those marks and keep your skin from developing any more of them. There are a number of stretch mark lotions on the market that claim to remove existing marks and prevent new ones from forming.
1. Vitamin E
Vitamin E lotion will help keep your skin softer and healthier, but it is not going to remove or prevent the ugly marks completely. It will help keep your skin in good condition, though it is not the cure you want. It is something that gets discussed a great deal, and some believe it is a cure, but it is not.
2. Cocoa Butter
Another of the most commonly recommended stretch mark lotions you can try is cocoa butter. Some women swear by cocoa butter, but it simply does not do what you expect it to. It is not a solution for these ugly marks, although it is good for your skin and something you may want to use anyway.
3. Real stretch mark lotions
A lotion that actually works will most likely be 100% natural and will deliver the results you are after. It should relieve the discomfort, so that you no longer have to live with ugly marks on your breasts, your thighs, or your stomach. A lotion designed specifically for this purpose will not only reduce the current marks but will also help prevent any future ones.