Posted by Richy George on 30 August, 2018. This post was originally published on this site.
The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.
What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.
The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, many are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal, and the project is reacting to this with “Ironic,” a service that offers the management and automation features necessary to run these kinds of deployments.
“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me.
Ironic itself isn’t new, but in today’s update it gains user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a certified Kubernetes installer, meaning users can be confident that OpenStack and Kubernetes work together just as expected.
Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.
“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”
With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).
The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (and the whole public cloud side of OpenStack, too, often gets overlooked, but continues to grow).
Posted by Richy George on 30 August, 2018.
InVision today announced a newly expanded integration and strategic partnership with Atlassian that will let users of Confluence, Trello and Jira see and share InVision prototypes from within those programs.
Atlassian’s product suite is built around making product teams faster and more efficient. These tools streamline and organize communication so developers and designers can focus on getting the job done. Meanwhile, InVision’s collaboration platform has caught on to the idea that design is now a team sport, letting designers, engineers, executives and other stakeholders be involved in the design process right from the get-go.
Specifically, the expanded integration allows designers to share InVision Studio designs and prototypes right within Jira, Trello and Confluence. InVision Studio was unveiled late last year, offering designers an alternative to Sketch and Adobe.
Given the way design and development teams use both companies’ products, it only makes sense to let the two suites communicate with one another.
As part of the partnership, Atlassian has also made a strategic financial investment in InVision, though the companies declined to share the amount.
Here’s what InVision CEO Clark Valberg had to say about it in a prepared statement:
In today’s digital world creating delightful, highly effective customer experiences has become a central business imperative for every company in the world. InVision and Atlassian represent the essential platforms for organizations looking to unleash the potential of their design and development teams. We’re looking forward to all the opportunities to deepen our relationship on both a product and strategic basis, and build toward a more cohesive digital product operating system that enables every organization to build better products, faster.
InVision has been working to position itself as the Salesforce of the design world. Alongside InVision and InVision Studio, the company has also built out an asset and app store, as well as launched a small fund to invest in design startups. In short, InVision wants the design ecosystem to revolve around it.
Considering that InVision has raised more than $200 million, and serves 4 million users, including 80 percent of the Fortune 500, it would seem that the strategy is paying off.
Posted by Richy George on 29 August, 2018.
Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.
Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project like its CI/CD testing infrastructure, container downloads and DNS services on its cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.
Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs over 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.
It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure either.
“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google’s employees will still be involved in running the Kubernetes infrastructure.
“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands – making the project not just open source, but openly managed, by an open community.”
It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.
Posted by Richy George on 29 August, 2018.
The interconnectedness of the cloud has allowed us to share content widely with people inside and outside the organization and across different applications, but that ability has created a problem of its own, a kind of digital fragmentation. How do you track how that piece of content is being used across a range of cloud services? It’s a problem Box wants to solve with its latest features, Activity Stream and Recommended Apps.
The company made the announcements at BoxWorks, its annual customer conference being held this week in San Francisco.
Activity Stream provides a way to track your content in real time as it moves through the organization, including who touches it and what applications it’s used in, acting as a kind of digital audit trail. One of the big problems with content in the cloud age is understanding what happened to it after you created it. Did it get used in Salesforce or ServiceNow or Slack? You can now follow the path of your content and see how people have shared it, and this could help remove some of the disconnect people feel in the digital world.
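Box hasn’t published the data model behind Activity Stream, but conceptually an audit trail like the one described reduces to an append-only event log keyed by document, which can be filtered to reconstruct where a piece of content has traveled. A minimal sketch, with all field names hypothetical:

```python
from datetime import datetime, timezone

# Illustrative only: this is not Box's actual implementation, just the
# shape of a per-document audit trail like the one the article describes.
activity_stream = []

def record_event(doc_id: str, app: str, actor: str, action: str) -> None:
    """Append one event: who touched which document, in which app."""
    activity_stream.append({
        "doc_id": doc_id,
        "app": app,          # e.g. Salesforce, ServiceNow, Slack
        "actor": actor,
        "action": action,    # e.g. "shared", "viewed", "edited"
        "at": datetime.now(timezone.utc).isoformat(),
    })

def trail(doc_id: str) -> list:
    """Return the ordered audit trail for one document."""
    return [e for e in activity_stream if e["doc_id"] == doc_id]

record_event("q3-plan.docx", "Slack", "alice", "shared")
record_event("q3-plan.docx", "Salesforce", "bob", "viewed")
record_event("budget.xlsx", "ServiceNow", "carol", "edited")
```

Filtering the log by document then yields exactly the kind of per-file history the feature surfaces: `trail("q3-plan.docx")` shows the content moving from Slack to Salesforce.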
As Jeetu Patel, Box’s Chief Product and Chief Strategy Officer, points out, an average large company could have more than a thousand apps, and there is no good way to connect the dots when it comes to tracking unstructured content and getting a unified view of the digital trail.
“We integrate with over 1400 applications, and as we integrate with those applications, we thought if we could surface those events, it would be insanely useful to our users,” he said. Patel sees this as the beginning of an important construct, the notion of a content hub where you can see the entire transaction record associated with a piece of content.
But Box didn’t want to stop with just a laundry list of the connections. It also created deep links into the applications being used, so a user can click a link, open the application and view the content in the context of that other application. “It seems like Box was a logical place to get a bird’s eye view of how content is being used,” Patel said, explaining Box’s thinking in creating this feature.
A related feature is a list of Recommended Apps. Based on the Box Graph and what Box knows about the user, the content they use, and how it’s interconnected with other cloud apps, Box displays a list of recommended apps right in its interface. This lets users access those applications in the context of their work, so, for instance, they could share the content in Slack right from the document.
For starters, Recommended Apps integrations include G Suite apps, Slack, Salesforce, DocuSign and Netsuite, but Patel says anyone who is integrated with the web app via the API will start showing up in Activity Stream.
While the products were announced today, Box is still working out the kinks in terms of how this will work. They expect these features to be available early next year. If they can pull this off, it will go a long way toward solving the digital fragmentation problem and making Box the content center for organizations.
Posted by Richy George on 29 August, 2018.
Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.
With this, the seven-year-old company has now raised a total of $174 million.
As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.
What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.
Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, no matter whether that’s from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.
Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”
As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”
Posted by Richy George on 28 August, 2018.
Microsoft today announced a couple of AI-centric updates for OneDrive and SharePoint users with an Office 365 subscription that bring more of the company’s machine learning smarts to its file storage services.
All of these features will launch at some point later this year. With the company’s Ignite conference in Orlando coming up next month, it’s probably a fair guess that we’ll see some of these updates make a reappearance there.
The highlight of these announcements is that starting later this year, both services will get automated transcription services for video and audio files. While video is great, it’s virtually impossible to find any information in these files without spending a lot of time. And once you’ve found it, you still have to transcribe it. Microsoft says this new service will handle the transcription automatically and then display the transcript as you’re watching the video. The service can handle over 320 file types, so chances are it’ll work with your files, too.
Other updates the company announced today include a new file view for OneDrive and Office.com that will recommend files to you by looking at what you’ve been working on lately across Microsoft 365 and making an educated guess as to what you’ll likely want to work on now. Microsoft will also soon use a similar set of algorithms to prompt you to share files with your colleagues after you’ve just presented them in a meeting with PowerPoint, for example.
Power users will also soon see access statistics for any file in OneDrive and SharePoint.
Posted by Richy George on 28 August, 2018.
“You can’t hack what isn’t there,” Very Good Security co-founder Mahmoud Abdelkader tells me. His startup assumes the liability of storing sensitive data for other companies, substituting dummy credit card or Social Security numbers for the real ones. Then when the data needs to be moved or operated on, VGS injects the original info without clients having to change their code.
It’s essentially a data bank that allows businesses to stop storing confidential info under their unsecured mattress. Or you could think of it as Amazon Web Services for data instead of servers. Given all the high-profile breaches of late, it’s clear that many companies can’t be trusted to house sensitive data. Andreessen Horowitz is betting that they’d rather leave it to an expert.
That’s why the famous venture firm is leading an $8.5 million Series A for VGS, and its partner Alex Rampell is joining the board. The round also includes NYCA, Vertex Ventures, Slow Ventures and PayPal mafioso Max Levchin. The cash builds on VGS’ $1.4 million seed round, and will pay for its first big marketing initiative and more salespeople.
“Hey! Stop doing this yourself!” Abdelkader asserts. “Put it on VGS and we’ll let you operate on your data as if you possess it, with none of the liability.” While no data is ever 100 percent unhackable, putting it in VGS’ meticulously secured vaults means clients don’t have to become security geniuses themselves and instead can focus on what’s unique to their business.
“Privacy is a part of the UN Declaration of Human Rights. We should be able to build innovative applications without sacrificing our privacy and security,” says Abdelkader. He got his start in the industry by reverse-engineering games like StarCraft to build cheats and trainer software. But after studying discrete mathematics, cryptology and number theory, he craved a headier challenge.
Abdelkader co-founded Y Combinator-backed payment system Balanced in 2010, which also raised cash from Andreessen. But out-muscled by Stripe, Balanced shut down in 2015. While transitioning customers over to fellow YC alumni Stripe, Balanced received interest from other companies wanting it to store their data so they could be PCI-compliant.
Now Abdelkader and his VP from Balanced, Marshall Jones, have returned with VGS to sell that as a service. It’s targeting startups that handle data like payment card information, Social Security numbers and medical info, though eventually it could invade the larger enterprise market. It can quickly help these clients achieve compliance certifications for PCI, SOC2, EI3PA, HIPAA and other standards.
VGS’ innovation comes in replacing this data with “format preserving aliases” that are privacy safe. “Your app code doesn’t know the difference between this and actually sensitive data,” Abdelkader explains. In 30 minutes of integration, apps can be reworked to route traffic through VGS without ever talking to a salesperson. VGS locks up the real strings and sends the aliases to you instead, then intercepts those aliases and swaps them with the originals when necessary.
“We don’t actually see your data that you vault on VGS,” Abdelkader tells me. “It’s basically modeled after prison. The valuables are stored in isolation.” That means a business’ differentiator is their business logic, not the way they store data.
For example, fintech startup LendUp works with VGS to issue virtual credit card numbers that are replaced with fake numbers in LendUp’s databases. That way, if it’s hacked, users don’t get their cards stolen. But when those card numbers are sent to a processor to actually make a payment, the real card numbers are subbed in at the last minute.
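The format-preserving aliasing described above can be sketched in a few lines. This is purely an illustration of the idea, not VGS’ actual implementation; the `Vault` class and its method names are hypothetical:

```python
import secrets

class Vault:
    """Toy sketch of format-preserving aliasing (not VGS's real code).

    Sensitive values are swapped for aliases that keep the same shape,
    so application code and databases never see the real data. The real
    value is only revealed on the outbound path, e.g. to a payment
    processor.
    """

    def __init__(self):
        self._store = {}  # alias -> real value, held only in the vault

    def tokenize(self, card_number: str) -> str:
        # Randomize every digit except the last four; keep separators so
        # the alias has the exact same format as a real card number and
        # downstream code can't tell the difference.
        alias = "".join(
            secrets.choice("0123456789")
            if (i < len(card_number) - 4 and ch.isdigit())
            else ch
            for i, ch in enumerate(card_number)
        )
        self._store[alias] = card_number
        return alias

    def reveal(self, alias: str) -> str:
        # Called only when the real value must leave the vault.
        return self._store[alias]

vault = Vault()
alias = vault.tokenize("4111-1111-1111-1234")
```

Because the alias keeps the original's length, separators and last four digits, a client database can store and display it unchanged, while only the vault can map it back to the real number.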
VGS charges per data record and operation, with the first 500 records and 100,000 sensitive API calls free; $20 a month gets clients double that, and then they pay 4 cents per record and 2 cents per operation. VGS provides access to insurance too, working with a variety of underwriters. It starts with $1 million policies that can be much larger for Fortune 500s and other big companies, which might want $20 million per incident.
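Taking that pricing at face value, a rough monthly bill on the paid tier works out as follows. Note this is my reading of the numbers above (doubled allowances plus per-unit overages), not VGS’ published price sheet:

```python
def monthly_bill(records: int, operations: int) -> float:
    """Estimate a monthly bill on the $20 plan described in the article.

    Assumption (mine, not VGS's documentation): the $20/month plan
    doubles the free tier to 1,000 records and 200,000 operations, and
    overages cost $0.04 per record and $0.02 per operation.
    """
    base = 20.00
    extra_records = max(0, records - 1_000)
    extra_operations = max(0, operations - 200_000)
    return base + 0.04 * extra_records + 0.02 * extra_operations

# 1,500 records and 250,000 operations:
# 500 extra records * $0.04 = $20, plus 50,000 extra ops * $0.02 = $1,000,
# plus the $20 base: $1,040 total
bill = monthly_bill(1_500, 250_000)
```

The striking part is how quickly operation volume, not record count, dominates the bill at scale.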
Obviously, VGS has to be obsessive about its own security. A breach of its vaults could kill its brand. “I don’t sleep. I worry I’ll miss something. Are we a giant honey pot?” Abdelkader wonders. “We’ve invested a significant amount of our money into 24/7 monitoring for intrusions.”
Beyond the threat of hackers, VGS also has to battle with others picking away at part of its stack or trying to compete with the whole, like TokenEx, HP’s Voltage, Thales’ Vormetric, Oracle and more. But it’s do-it-yourself security that’s the status quo and what VGS is really trying to disrupt.
But VGS has a big accruing advantage. Each time it works with a client’s partners, such as Experian or TransUnion for a client running credit checks, it already has a relationship with those partners the next time another client needs to connect with them. Abdelkader hopes that, “Effectively, we become a standard of data security and privacy. All the institutions will just say ‘why don’t you use VGS?’”
That standard only works if it’s constantly evolving to win the cat-and-mouse game versus attackers. While a company is worrying about the particular value it adds to the world, these intelligent human adversaries can find a weak link in their security — costing them a fortune and ruining their relationships. “I’m selling trust,” Abdelkader concludes. That peace of mind is often worth the price.
Posted by Richy George on 27 August, 2018.
VMware is hosting its VMworld customer conference in Las Vegas this week, and to get things going it announced that it’s acquiring Boston-based CloudHealth Technologies. The companies did not disclose the terms of the deal, but Reuters reports the price is $500 million.
CloudHealth provides VMware with a crucial multi-cloud management platform that works across AWS, Microsoft Azure and Google Cloud Platform, giving customers a way to manage cloud cost, usage, security and performance from a single interface.
Although AWS leads the cloud market by a large margin, the market is vast and growing, and most companies are not putting all their eggs in a single vendor’s basket. Instead, they are looking at best-of-breed options for different cloud services.
This multi-cloud approach is great for customers in that they are not tied down to any single provider, but it does create a management headache as a consequence. CloudHealth gives multi-cloud users a way to manage their environment from a single tool.
VMware’s chief operating officer for products and cloud services, Raghu Raghuram, says CloudHealth solves the multi-cloud operational dilemma. “With the addition of CloudHealth Technologies we are delivering a consistent and actionable view into cost and resource management, security and performance for applications across multiple clouds,” Raghuram said in a statement.
CloudHealth began offering support for Google Cloud Platform just last month. CTO Joe Kinsella told TechCrunch why they had decided to expand their platform to include GCP support: “I think a lot of the initiatives that have been driven since Diane Greene joined Google [at the end of 2015] and began really driving towards the enterprise are bearing fruit. And as a result, we’re starting to see a really substantial uptick in interest.”
It also gave CloudHealth a complete solution for managing across three of the biggest cloud vendors. That last piece very likely made the company an even more attractive target for VMware, which apparently was looking to buy a solution that would help customers manage across hybrid and multi-cloud environments.
The company had been planning future expansion to manage not just the public cloud, but also private clouds and data centers from one place, a strategy that should fit well with what VMware has been trying to do in recent years to help companies manage a hybrid environment, regardless of where their virtual machines live.
With CloudHealth, VMware not only gets the multi-cloud management solution, it also gains CloudHealth’s 3,000 customers, which include Yelp, Dow Jones, Zendesk and Pinterest.
CloudHealth was founded in 2012 and has raised over $87 million. Its most recent round was a $46 million Series D in June 2017 led by Kleiner Perkins. Other lead investors across earlier rounds have included Sapphire Ventures, Scale Venture Partners and .406 Ventures.
Posted by Richy George on 26 August, 2018.
In tech circles, it would be easy to assume that the world of high-impact charitable giving is a rich man’s game where deals are inked at exclusive black tie galas over fancy hors d’oeuvres. Both Mark Zuckerberg and Marc Benioff have donated to SF hospitals that now bear their names. Gordon Moore has given away $5B, including $600M to Caltech, which was at the time the largest donation ever made to a university. And of course, Bill Gates has already donated $27B to every cause imaginable (and co-founded The Giving Pledge, a consortium of billionaires pledging to donate most of their net worth to charity by the end of their lifetimes).
For Bill, that means he has about $90B left to give.
For the average working American, this world of concierge giving is out of reach, both in check size, and the army of consultants, lawyers and PR strategists that come with it. It seems that in order to do good, you must first do well. Very well.
Bright Funds is looking to change that. Founded in 2012, the SF-based startup aims to democratize concierge giving so that every individual “can give with the same effectiveness as Bill and Melinda Gates.” It is doing for philanthropy what Vanguard and Wealthfront have done for retail asset management.
In particular, they are looking to unlock dollars from the underutilized corporate benefit of matching funds for donations, which according to Bright Funds is offered by over 60% of medium to large enterprises, but only used by 13% of employees at these companies. The need for such a service is clear — these programs are cumbersome, transactional, and often offline. Make a donation, submit a receipt, and wait for it to churn through the bureaucratic machine of accounting and finance before matching funds show up weeks later.
Bright Funds is looking to make your company’s matching funds benefit as accessible and important to you as your free lunches or massages. Plus, Bright Funds charges companies per seat, along with a transaction fee to cover the cost of payment processing, sparing employees any expense.
It’s a model that is working. According to Bright Funds CEO Ty Walrod, customers see on average a 40% year-over-year increase in funds donated through the platform. More importantly, Bright Funds transforms not only an employee’s relationship to personal philanthropy, but also their relationship to the company they work for.
This model of bottoms-up giving is a welcome change from the big foundation model which has recently been rocked by scandal. The Silicon Valley Community Foundation was the go-to foundation for The Who’s Who of Silicon Valley elite. It rode the latest tech boom to become the largest community foundation in eleven short years with generous stock donations from donors like Mark Zuckerberg ($1.8 billion), GoPro’s Nicholas Woodman ($500 million), and WhatsApp co-founder Jan Koum ($566 million). Today, at $13.5 billion, it surpasses the 80+ year old Ford Foundation in endowment size.
However, earlier this year, their star fundraiser Mari Ellen Loijens (credited with raising $8.3B of the $13.5B) was accused of repeatedly bullying and sexually harassing coworkers, allegations that the Foundation had “known about for years” but failed to act upon. In 2017, a similar case occurred when USC’s star fundraiser David Carrera stepped down on charges of sexual harassment after leading the university’s historic $6 billion fundraising campaign.
While large foundations and endowments do important work, their structure relies too much on whale hunting for big checks, placing an inordinate amount of power in the hands of a small group of talented fundraisers.
This stands in contrast to Bright Funds’ ethos — to lead a grassroots movement in empowering individual employees to make their dollar of giving count.
Bright Funds is the latest iteration in a lineup of workplace giving platforms. MicroEdge and Cybergrants paved the way in the ’80s and ’90s by digitizing the giving experience, but their software was mainly on-premise and lacked a focus on user experience. Benevity and YourCause arrived in 2007 to bring workplace giving to the cloud, but they were still not turnkey solutions that could be easily implemented.
Bright Funds started as a consumer platform, and has retained that heritage in its approach to product design, aiming to reduce friction for both employee and company adoption. This is why many of its first customers were midsized tech startups with limited resources that were looking for a turnkey solution, including Eventbrite, Box, Github and Contently. It is now finding its way upmarket into larger, more established enterprises like Cisco, VMware, Campbell’s Soup Company and Sunpower.
Bright Funds’ approach to product has brought a number of innovations to this space.
The first is the concept of a cause-focused “fund.” Similar to a mutual fund or ETF, these funds are portfolios of nonprofits curated by subject-matter experts and tailored to a specific cause area (e.g. conservation, education, poverty, etc.). This solves one of the chief concerns of any donor — is my dollar being put to good use toward the causes I care about? Passionate about conservation? Invest with Jim Leape from the Stanford Woods Institute for the Environment, who brings over three decades of conservation experience to choosing the six nonprofits in Bright Funds’ conservation portfolio. The same expertise is available across a number of cause areas.
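Mechanically, a fund of this kind reduces to splitting one donation across its portfolio of nonprofits. The sketch below assumes an even split; Bright Funds hasn’t published its actual allocation logic, and a curator could just as easily assign weights:

```python
def allocate(donation_cents: int, nonprofits: list) -> dict:
    """Split one donation evenly across a fund's nonprofits.

    Illustrative only, not Bright Funds' real algorithm. Works in
    cents so every cent is accounted for; any remainder after the
    even split goes to the organizations earliest in the list.
    """
    share, remainder = divmod(donation_cents, len(nonprofits))
    return {
        org: share + (1 if i < remainder else 0)
        for i, org in enumerate(nonprofits)
    }

# A $100.00 donation to a hypothetical three-nonprofit fund:
grants = allocate(10_000, ["org_a", "org_b", "org_c"])
```

Using integer cents rather than floating-point dollars is the important design choice here: the grants always sum back to exactly the donated amount.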
Additionally, funds can also be created by companies or employees. This has proven to be an important rallying point for emergency relief during natural disasters, when employees at a company can collectively assemble a list of nonprofits to donate to. In 2017, Cisco employees donated $1.8 million (including company matching) through Bright Funds to relief efforts for Hurricanes Harvey, Maria and Irma, as well as the central Mexico earthquakes, flooding in India and more.
The second key feature of their product is the impact timeline, a central news feed to understand where your dollars are going across all your cause areas. This transforms giving from a black box transaction to an ongoing dialogue between you and your charities.
Lastly, Bright Funds wants to take away all the administrative burden that might come with giving and volunteering — everything from tracking your volunteer opportunities and hours, to one-click tax reporting across all your charitable donations. In short, no more shoeboxes of receipts to process through in April.
Although Bright Funds is focused on transforming the individual giving experience, its paying customer at the end of the day is the enterprise.
And although it is philanthropic in nature, Bright Funds is not exempt from the procurement gauntlet that every enterprise software startup faces — what’s in it for the customer? What impact does workplace giving and volunteering have on culture and the bottom line?
To this end, there is evidence that corporate social responsibility has an impact on recruiting the next generation of workers. A study by Horizon Media found that 81% of millennials expect their companies to be good corporate citizens. A separate 2015 study found that 62% of millennials said they'd take a pay cut to work for a company that's socially responsible.
Box, one of Bright Funds' early customers, has seen this impact on recruiting firsthand (disclosure: Box is one of my former employers). Like most tech companies competing for talent in the Valley, Box used to give out lucrative bonuses for candidate referrals. It recently switched to giving out $500 in Bright Funds gift credit. Instead of seeing employee referrals dip, Box saw referrals "skyrocket," according to Box.org Executive Director Bryan Breckenridge. The program has become "one of the most cherished cultural traditions at Box," he said.
Additionally, like any corporate benefit, workplace giving should show up in employee retention metrics. Benevity released a study of 2 million employees across 118 companies on its platform that showed a 57% reduction in turnover among employees engaged in corporate giving or volunteering efforts. VMware, another Bright Funds customer, has seen an astonishing 82% of its 22,000 employees participate in its Citizen Philanthropy program of giving and volunteering, according to VMware Foundation Director Jessa Chin. Its full-time voluntary turnover rate (8%) is well below the software industry average of 13.2%.
Bright Funds still has a lot of work to do. CEO Walrod says that one of his top priorities is to expand the platform beyond US charities, finding ways to evaluate and incorporate international nonprofits.
They have also not given up on the dream of becoming a truly consumer platform, perhaps one day competing in the world of donor-advised funds, which today is dominated by big names like Fidelity and Schwab, which together house over $85B of assets. In the short term, Walrod wants to make every Bright Funds account work like a 401(k): it follows you from job to job and serves as a lasting record of the causes you care about and the time and resources you've invested in them.
Whether the impetus is altruism or something more utilitarian like retention, companies are increasingly realizing that their employees represent a charitable force that can be harnessed for the greater good. Bright Funds has more work to do, like any startup, but it is empowering the next set of donors to give with the same effectiveness as Gates, and perhaps one day at the same scale.
Posted by Richy George on 25 August, 2018. This post was originally published on this site.
Amazon has a big target on its back these days, and because of its size, scope and impact on local business, critics are right to look closely at the tax breaks and other subsidies it receives. There is nothing wrong with digging into these breaks to see if they achieve the goals governments set in terms of net new jobs. But Amazon isn't alone here by any means. Many states have a big tech subsidy story to tell, and it isn't always a tale that ends well for the subsidizing government.
In fact, a recent study by the watchdog group Good Jobs First found that states are willing to throw millions at high tech companies to lure them into building in their communities. The report cites three examples: Tesla's $1.25 billion, 20-year deal to build a battery factory in Nevada; Foxconn's $3 billion break to build a display factory in Wisconsin; and Apple's data center deal in Iowa, which resulted in a $214 million tax break.
Good Jobs First executive director Greg LeRoy doesn't think these subsidies are justifiable, arguing they take business development dollars away from smaller businesses that tend to build more sustainable jobs in a community.
"The 'lots of eggs in one basket' strategy is especially ill-suited. But many public leaders haven't switched gears yet, often putting taxpayers at great risk, especially because some tech companies have become very aggressive about demanding big tax breaks. Companies with famous names are even more irresistible to politicians who want to look active on jobs," LeRoy and his colleague Maryann Feldman wrote in a Guardian commentary last month.
While these deals are designed to attract the company to an area and generate jobs, that doesn’t always happen. The Apple-Iowa deal, for example, involved 550 construction jobs to build the $1.3 billion state-of-the-art facility, but will ultimately generate only 50 full-time jobs. It’s worth noting that in this case, Apple further sweetened the pot by contributing “up to $100 million” to a local public improvement fund, according to information supplied by the company.
One thing many lay people don’t realize, however, is that in spite of the size, cost and amount of real estate of these mega data centers, they are highly automated and don’t require a whole lot of people to run. While Apple is giving back to the community around the data center, in the end, if the goal of the subsidy is permanent high-paying jobs, there aren’t very many involved in running a data center.
It's not hard to find projects that didn't work out. A $2 million tax subsidy deal between Massachusetts and Nortel Networks in 2008 to keep 2,200 jobs in place and add 800 more failed miserably. By 2010 there were just 145 jobs left at the facility, yet the tax incentive lasted another four years, according to a Boston.com report.
More recent deals come at a much higher price. The $3 billion Foxconn deal in Wisconsin was expected to generate 3,000 direct jobs (and another 22,000 related ones). That comes out to an estimated cost of between $15,000 and $19,000 per job annually, far higher than the typical cost of $2,457 per job, according to data in the New York Times.
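As a rough sanity check on figures like these, the annual cost per job is simply the total subsidy divided by the job count and the payout horizon. The job count and horizon below are illustrative assumptions, not figures from this article (the Foxconn package was widely reported as up to roughly 13,000 jobs paid out over about 15 years):

```python
# Back-of-the-envelope cost-per-job arithmetic for a subsidy deal.
# ASSUMPTIONS (hypothetical, for illustration): ~13,000 jobs, ~15-year payout.
total_subsidy = 3_000_000_000   # $3 billion package
jobs = 13_000                   # assumed job count
years = 15                      # assumed payout horizon

cost_per_job = total_subsidy / jobs          # total subsidy per job
annual_cost_per_job = cost_per_job / years   # annualized figure

print(f"${cost_per_job:,.0f} per job total")
print(f"${annual_cost_per_job:,.0f} per job per year")
```

Under these assumptions the annualized figure lands near $15,000 per job per year, consistent with the low end of the range quoted above; counting the 22,000 "related" jobs as well would push the per-job cost far lower, which is why job-count definitions matter so much in these debates.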
Meanwhile, states are falling all over themselves with billions in subsidies to give Amazon whatever its little heart desires to build HQ2, which could generate up to 50,000 jobs over a decade if all goes according to plan. The question, as with the Foxconn deal, is whether states can truly justify the cost per job, along with the impact on infrastructure and housing, to make the deal worth it.
What's more, how do you ensure you get at least a modest return on that investment? In the case of the Nortel deal in Massachusetts, shouldn't the Commonwealth have protected itself against a catastrophic failure instead of continuing to give the tax break for years after it was clear Nortel couldn't live up to its side of the agreement?
Not every deal needs to be a home run, but you want to at least ensure you get a decent number of net new jobs out of it, and that there is some fairness in the end, regardless of the outcome. States also need to figure out the impact of any subsidy on other economic development plans, and not simply fall for name recognition over common sense.
These are questions every state needs to consider as it pours money into these companies. It's understandable that, in post-industrial America, where many factory jobs have been automated away, states want to lure high-paying tech jobs to their communities. But it's still incumbent upon officials to do due diligence on the total impact of a deal, to be certain the cost is justified in the end.