The body of Vanessa Pierre was found early Friday morning along an expressway in Queens.
A Long Island man has been charged with second-degree murder for allegedly strangling his pregnant girlfriend and dumping her body near an expressway in Queens.
An MTA bus driver discovered the body of Vanessa Pierre, 29, early Friday morning along the Horace Harding Expressway in Bayside. She was wearing pajamas, and gray sweatpants were wrapped around her neck, PEOPLE reports. Her family said she was six months pregnant and her unborn baby girl could not be saved.
On October 23rd in the confines of the 111th precinct, 29-year-old Vanessa Pierre and her unborn baby were found deceased laying facedown on the sidewalk off of Horace Harding Expressway. pic.twitter.com/OiTW0AMzud
Investigators say 29-year-old Goey Charles strangled Pierre and the heinous crime was caught on surveillance video (see Twitter clip above).
A statement from the Queens District Attorney’s Office alleges that the footage shows Charles pulling over on the expressway just before 3 a.m. and attacking Pierre, whom he’d been dating for more than a year.
“Charles exited the driver’s seat of a 2019 white Dodge Challenger, which is registered in the name of the victim, Vanessa Pierre,” the statement reads. “He moved to the backseat where Ms. Pierre could be seen moving on the video footage. And then, soon afterwards, all movement stopped and the victim appeared to lay across the backseat motionless.”
Over an hour later, “the defendant is observed exiting the vehicle and then allegedly dragging the pregnant woman out of the car and dropping her body onto the sidewalk.”
Charles is accused of leaving his girlfriend’s “dying body on the side of the roadway” and fleeing the scene in his car.
There were several candlelight vigils in Bayside for a 29-year-old health care worker who was 6 months pregnant, strangled, dumped on the side of the road. There was also a balloon announcing Vanessa Pierre was carrying a baby girl. Her boyfriend has been charged w/ murder. @abc7ny pic.twitter.com/PJoJUfwOv3
By the time first responders arrived, Pierre was dead, according to the report.
“This is a heartbreaking case,” Queens District Attorney Melinda Katz said. “A pregnant woman was allegedly killed by this defendant, the father of her unborn child. Her family is devastated. The defendant is in custody and will answer for his alleged actions.”
If convicted of second-degree murder, Charles faces 25 years to life in prison.
Pierre’s family has launched a GoFundMe to cover burial expenses.
Allen’s empire continues to grow as he expands his media company’s portfolio of TV offerings
Byron Allen’s Allen Media Group has purchased two over-the-air broadcast TV networks, This TV and Light TV, in an acquisition from MGM.
The networks broadcast 24 hours a day and are available via over-the-air, cable and online for free nationwide. Both This TV and Light TV program a variety of MGM’s film and television content.
This TV, launched in 2008, offers films and other limited general entertainment content in the form of classic television series, while Light TV, which was founded in 2016, is a free broadcast network featuring family-friendly movies and television series.
Allen, who is also the owner of theGrio, has now expanded his media empire to a total of ten networks, including The Weather Channel, Cars.TV, Comedy.TV, ES.TV, JusticeCentral.TV, MyDestination.TV, Pets.TV and Recipe.TV.
“I am happy to announce that Allen Media Group has achieved another critical milestone by successfully acquiring two over-the-air broadcast television networks This TV and Light TV from MGM,” said Allen, Founder/Chairman/CEO of Entertainment Studios and Allen Media Group.
Byron Allen attends Tay Da Prince “Love One Another” music video shoot featuring John Legend at Smashbox Studios on November 20, 2019 in Culver City, California. (Photo by JC Olivera/Getty Images)
“We are going to continue to invest a substantial amount of capital into the programming, marketing, and distribution of these networks. We are strong believers in broadcasting and free-streaming direct-to-consumer platforms.”
Chris Ottinger, MGM’s president of World Wide Television Distribution & Acquisitions said, “Byron Allen is an innovator and has built a tremendous portfolio of networks within his organization. With a reach of over 81 million households, these networks will be great additions to his Allen Media Group.”
A woman who supports Trump is caught on film yelling the N-word.
With only days to go before the next presidential election, Donald Trump’s supporters are waving their flags proudly.
A video recently went viral on Twitter of a woman holding a sign that says “Trump Pence Make America Great Again 2020.” But it wasn’t the sign that alarmed bystanders. It was what she was saying.
The video, shot with an eerie tune playing in the background, shows the woman walking oddly down the street and saying “peace and love” in a screechy voice, in reference to a sign held by the woman holding the camera.
She approached a man who asked her to keep away. “Don’t worry, I’ll keep away from your n***** ass,” she replied to the man wearing a mask.
She then turned her attention to the woman shooting the video, who identified herself as Trinity Patrick. Patrick asked the dancing woman her name, and she simply mimicked her. Patrick then addressed other people nearby holding Trump signs, saying, “I know this is a bad representation for your side.”
The dancing woman shot back with, “It’s a bad representation for your n***** ass.”
It is unclear how old the video is or where it was shot, but it took a sharp turn when the woman made a shocking confession. Patrick told the woman she loved her energy.
“I’ve got plenty of it, baby, I’m 74, married 55 years, just killed my husband in June.”
Patrick responded with, “You killed your husband in June? I believe it.”
Trump supporters are gearing up for the November 3rd election and are being very vocal about why Trump deserves their vote.
NPR recently spoke with Reymundo Torres of Arizona, a loyal Trump supporter. Torres identifies as Mexican and says he supports Trump because he fulfills his promises.
“The thing that initially attracted me and keeps me tied to him is that he has taught Republicans how to not just win, but no longer throw our faces and bodies in front of every punch that the left is willing to throw,” Torres said.
Mookie Betts had a mad dash to home plate in the sixth inning to put Los Angeles over the top
ARLINGTON, Texas (AP) — No dogpile, no champagne and a mask on nearly every face — the Los Angeles Dodgers celebrated their first World Series title since 1988 in a manner no one could have imagined prior to the coronavirus pandemic.
They did it without Justin Turner, their red-headed star who tested positive for COVID-19 in the middle of their clinching victory.
Turner was removed from Los Angeles’ 3-1 victory over the Tampa Bay Rays in Game 6 on Tuesday night after registering Major League Baseball’s first positive test in 59 days and wasn’t on the field as the Dodgers enjoyed the spoils of a title earned during a most unusual season.
“Thanks to everyone reaching out!” Turner said on Twitter. “I feel great, no symptoms at all. Just experienced every emotion you can possibly imagine. Can’t believe I couldn’t be out there to celebrate with my guys! So proud of this team & unbelievably happy for the City of LA.”
Commissioner Rob Manfred confirmed Turner’s positive test moments after presenting the World Series trophy to Los Angeles — a jarring reminder of all that’s been different in this season where the perennially favored Dodgers finally broke through.
Los Angeles Dodgers’ Mookie Betts celebrates after a home run against the Tampa Bay Rays during the eighth inning in Game 6 of the baseball World Series Tuesday, Oct. 27, 2020, in Arlington, Texas. (AP Photo/Sue Ogrocki)
Mookie Betts, who came to the Dodgers to make a World Series difference, had a mad dash to home plate in the sixth inning to put Los Angeles over the top.
It was the end of a frustrating championship drought for LA — and perhaps just the start for Betts and the Dodgers, whose seventh World Series title was their sixth since leaving Brooklyn for the West Coast in 1958.
Betts bolted from third for the go-ahead run on World Series MVP Corey Seager’s infield grounder, then led off the eighth with a punctuating homer.
“I just came to be a part of it. I’m just happy I could contribute,” Betts said.
Clayton Kershaw was warming in the bullpen when Julio Urias struck out Willy Adames to end it and ran alongside teammates to celebrate in the infield, later joined by family who had been in the bubble with them in North Texas. Players were handed face masks as they gathered, although many of their embraces came mask-free even after Turner’s positive test.
The Dodgers had played 5,014 regular season games and were in their 114th postseason game since Orel Hershiser struck out Oakland’s Tony Phillips for the final out of the World Series in 1988, the same year Kershaw — the three-time NL Cy Young Award winner who won Games 1 and 5 of this Series — was born in nearby Dallas.
Los Angeles had come up short in the World Series twice in the previous three years. Betts was on the other side two years ago and homered in the clinching Game 5 for the Boston Red Sox, who before this season traded the 2018 AL MVP to the Dodgers. They later gave him a $365 million, 12-year extension that goes until he turns 40 in 2032.
Los Angeles Dodgers third baseman Justin Turner, left, and second baseman Enrique Hernandez warm up during batting practice before Game 5 of the baseball World Series against the Tampa Bay Rays Sunday, Oct. 25, 2020, in Arlington, Texas. (AP Photo/Eric Gay)
Betts’ 3.2-second sprint was just enough to beat the throw by first baseman Ji-Man Choi, pushing Los Angeles ahead 2-1 moments after Rays manager Kevin Cash pulled ace left-hander Blake Snell despite a dominant performance over 5 1/3 innings.
“I’m not exactly sure why,” Betts said when asked about the move. “I’m not going to ask any questions. He was pitching a great game.”
Randy Arozarena, the powerful Tampa Bay rookie, extended his postseason record with his 10th homer in the first off rookie right-hander Tony Gonsolin, the first of seven Dodgers pitchers. The Rays never got another runner past second base as LA’s bullpen gave reliever-reliant Tampa Bay a taste of its own medicine.
About 2 1/2 weeks after the Lakers won the NBA title while finishing their season in the NBA bubble in Orlando, Florida, the Dodgers gave Los Angeles another championship in a year when the novel coronavirus pandemic delayed, shortened and moved around sports seasons.
Seager, also the NLCS MVP, set Dodgers records with eight homers and 20 RBIs this postseason.
The MLB season didn’t start until late July and was abbreviated to 60 games for the shortest regular season since 1878. And the expanded postseason, with 16 teams making it instead of 10, almost went the full distance.
It ended when Urias got the last two Tampa Bay batters on called third strikes — the 15th and 16th Ks by the Rays — with catcher Austin Barnes stuffing the last pitch in his back pocket. Along with the 11 strikeouts by the Dodgers, it was the most combined strikeouts in a nine-inning World Series game.
One of the accounts was connected to hackers that sent threatening emails to Americans last week.
Facebook has removed three small networks for using fake accounts to spread misinformation about the presidential election.
The first network had two pages and 22 Instagram accounts, which were removed for foreign interference, per Adweek. The second network consisted of 12 accounts, six pages and 11 Instagram accounts — all created for government interference.
Two of the networks targeted voters in the United States.
One of the accounts was based in Iran and connected to hackers that sent threatening emails to Americans last week.
As theGrio previously reported, FBI Director Christopher Wray and National Intelligence Director John Ratcliffe announced at a news conference that Russia and Iran obtained U.S. voter information in an effort to influence the election. Iranian intelligence used the hacked information to send threatening emails to Democratic voters, falsely purporting to be from the far-right group Proud Boys.
The emails warned “we will come after you” if the recipients didn’t vote for Trump, per The Associated Press.
Meanwhile, Facebook took action against the three networks as part of its “coordinated inauthentic behavior” policy, which removes fake accounts that engage in disinformation campaigns.
The social media platform has removed more than 100 such networks in the past three years, The Washington Post reports.
“We know these actors are going to keep trying, but I think we are more prepared than we have ever been,” the company’s head of security policy, Nathaniel Gleicher, said on a call with reporters Tuesday.
“We’ve seen consistently that as it gets harder for these actors to keep their networks undetected for long periods of time, they are trying to play on our collective expectation of widespread interference to create the perception that they’re more impactful than they in fact are,” Gleicher said.
With Election Day fast approaching, Facebook is preparing for a surge in disinformation from both foreign and domestic agitators.
“We should all be conscious of the risk that malicious actors could use fictitious claims to target or suggest that election infrastructure has been compromised or election outcomes would be inaccurate in an effort to suppress voter turnout or erode trust in polling results, particularly in battleground states,” said Gleicher.
The New York Times obtained the tax records of Donald Trump and most of the debt had to do with a Chicago real estate deal
President Donald Trump’s federal income tax records reveal that, since 2010, Deutsche Bank and other lenders have forgiven about $287 million in debt that he failed to repay. The New York Times obtained the documents that confirmed it.
The vast majority of this unpaid debt was related to a 92-floor skyscraper in Chicago that was completed in 2008. Trump had hoped to turn the property into another of his marquee real estate developments.
“We’re in love with the building,” Trump said at the time. “We’re very, very happy with what’s happened with respect to this building and how fast we put it up.”
However, the investment soon became a disappointment in the throes of the financial crisis. Trump attempted to back out of paying what he owed to the financial institutions that loaned him money. He owed Deutsche Bank $334 million. The New York Times reported on Tuesday that Trump was given an extended grace period to pay on the defaulted loans after asking for extensions.
Trump sued Deutsche Bank and Fortress in 2008 for “predatory lending practices” and demanded $3 billion in damages. Deutsche Bank countersued, but a private settlement was reached in 2010.
Trump still owed Deutsche Bank $99 million in 2012. Ultimately, the lenders forgave the loans rather than continue to fight Trump in court. The outlet described the president’s pattern of forgoing payments on his loans while convincing lenders to go easy on him.
“These were all arm’s length transactions that were voluntarily entered into between sophisticated parties many years ago in the aftermath of the 2008 global financial crisis and the resulting collapse of the real estate markets,” Alan Garten, the Trump Organization’s chief legal officer, said.
Garten insisted that the law and proper protocol were followed for the loan debt to be forgiven.
However, the transaction is now part of an investigation by New York Attorney General Letitia James, who initiated the probe last year. It does not appear that Trump paid any federal taxes on the forgiven debt, according to the NYT’s analysis of his tax records.
“[The Office of Attorney General] is currently investigating whether the Trump Organization and Donald J. Trump … improperly inflated the value of Mr. Trump’s assets on annual financial statements in order to secure loans and obtain economic and tax benefits. One particular focus of this inquiry, as relevant here, is whether the Trump Organization and its agents improperly inflated, or caused to be improperly inflated, the value of the Seven Springs estate,” Assistant Attorney General Matthew Colangelo wrote at the time.
If you have ever had a book self-published through Amazon or similar fulfillment houses, chances are good that the physical book did not exist prior to the order being placed. Instead, that book existed as a PDF file, image files for cover art and author photograph, perhaps with some additional XML-based metadata indicating production instructions, trim, paper specifications, and so forth.
When the order was placed, it was sent to a printer that likely was the length of a bowling alley, where the PDF was converted into a negative and then laser printed onto the continuous paper stock. This was then cut to a precise size that varied minutely from page to page depending upon the binding type, before being collated and glued into the binding.
At the end of the process, a newly printed book dropped onto a rolling platform and from there to a box, where it was potentially wrapped and deposited automatically before the whole box was closed, labeled, and passed to a shipping gurney. From beginning to end, the whole process likely took ten to fifteen minutes, and more than likely no human hands touched the book at any point in the process. There were no plates to change out, no prepress film being created, no specialized inking mixes prepared between runs. Such a book was not "printed" so much as "instantiated", quite literally coming into existence only when needed.
It's also worth noting here that the same book probably was "printed" to a Kindle or similar ebook format, but in that particular case, it remained a digital file. No trees were destroyed in the manufacture of the ebook.
Such print on demand capability has existed since the early 2000s, to the extent that most people generally do not even think much about how the physical book that they are reading came into existence. Yet this model of publishing represents a profound departure from manufacturing as it has existed for centuries, and is in the process of transforming the very nature of capitalism.
Shortly after these printing presses came online, there were a number of innovations with thermally molded plastic that made it possible to create certain types of objects to exquisite tolerances without actually requiring a physical mold. Ablative printing techniques had been developed during the 1990s and involved the use of lasers to cut away at materials based upon precise computerized instruction, working in much the same way that a sculptor chips away at a block of granite to reveal the statue within.
Additive printing, on the other hand, made use of a combination of dot matrix printing and specialized lithographic gels that would be activated by two lasers acting in concert. The gels would harden at the point of intersection, then when done the whole would be flushed with reagents that removed the "ink" that hadn't been fixed into place. Such a printing system solved one of the biggest problems of ablative printing in that it could build up an internal structure in layers, making it possible to create interconnected components with minimal physical assembly.
The primary limitation that additive printing faced was the fact that it worked well with plastics and other gels, but the physics of metals made such systems considerably more difficult to solve - and a great deal of assembly requires the use of metals for durability and strength. By 2018, however, this problem was increasingly finding solutions for various types of metals, primarily by using annealing processes that heated up the metals to sufficient temperatures to enable pliability in cutting and shaping.
What this means in practice is that we are entering the age of just-in-time production in which manufacturing exists primarily in the process of designing what is becoming known as a digital twin. While one can argue that this refers to the use of CAD/CAM-like design files, there's actually a much larger, more significant meaning here, one that gets right to the heart of an organization's digital transformation. You can think of digital twins as the triumph of design over manufacturing, and data and metadata play an oversized role in this victory.
At the core of such digital twins is the notion of a model. A model, in the most basic definition of the word, is a proxy for a thing or process. A runway model, for instance, is a person who is intended to be a proxy for the viewer, showing off how a given garment looks. An artist's model is a stand-in or proxy for the image, scene, or illustration that an artist is producing. An architectural model is a simulation of how a given building will look when constructed, and with 3D rendering technology, such models can appear quite life-like. Additionally, though, models can simulate more than appearance - they can simulate structural integrity, strain analysis, and even chemical interactions. We create models of stars, black holes, and neutron stars based upon our understanding of physics, and models of disease spread in the case of epidemics.
Indeed, it can be argued that the primary role of a data scientist is to create and evaluate models. It is one of the reasons that data scientists are in such increasing demand: the ability to build models is one of the most pressing capabilities that any organization can have, especially as more and more of a company's production exists in the form of digital twins.
There are several purposes for building such models: the most obvious is to reduce (or in some cases eliminate altogether) the cost of instantiation. If you create a model of a car, you can stress test the model, can get feedback from potential customers about what works and what doesn't in its design, can determine whether there's sufficient legroom or if the steering wheel is awkwardly placed, can test to see whether the trunk can actually hold various sized suitcases or packages, all without the cost of actually building it. You can test out gas consumption (or electricity consumption), can see what happens when it crashes, can even attempt to explode it. While such models aren't perfect (nor are they uniform), they can often serve to significantly reduce the things that may go wrong with the car before it ever goes into production.
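To make that concrete, here is a minimal sketch, in Python, of the kind of question a digital twin can answer before anything is built: whether a set of suitcases fits a candidate trunk design. The Box and fits names, the dimensions, and the 80% packing allowance are all invented for illustration, not taken from any real CAD system.

```python
# A minimal sketch (not any vendor's actual API): a parametric "digital twin"
# of a car trunk, used to answer a design question -- will these suitcases
# fit? -- before anything is physically built. Dimensions are illustrative.
from dataclasses import dataclass

@dataclass
class Box:
    length: float  # cm
    width: float   # cm
    height: float  # cm

    @property
    def volume(self) -> float:
        return self.length * self.width * self.height

def fits(trunk, items):
    """Cheap feasibility check on the digital twin: every item must fit within
    the trunk's dimensions, and total volume must stay under ~80% of trunk
    volume (a crude packing allowance)."""
    for item in items:
        if (item.length > trunk.length or item.width > trunk.width
                or item.height > trunk.height):
            return False
    return sum(i.volume for i in items) <= 0.8 * trunk.volume

trunk = Box(100, 90, 50)                      # candidate trunk design
luggage = [Box(70, 45, 25), Box(55, 40, 20)]  # two typical suitcases
print("Design accepted" if fits(trunk, luggage) else "Redesign needed")
```

The point is not the geometry check itself, but that the question gets asked and answered against the model, long before any metal is cut.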
However, such models, such digital twins, also serve other purposes. All too often, decisions are made not on the basis of what the purchasers of the thing being represented want, but what a designer, a marketing executive, or the CEO of a company feels the customer should get. When there was a significant production cost involved in instantiating the design, this often meant that there was a strong bias towards what the decision-maker greenlighting the production felt should work, rather than what the stakeholders who would not only be purchasing but also using the product actually wanted. With 3D production increasingly becoming a reality, however, control is shifting from the producer to the consumer, and not just at the higher end of the market.
Consider automobile production. Currently, millions of cars are produced by automakers globally, but a significant number never get sold. They end up clogging lots, moving from dealerships to secondary markets to fleet sales, and eventually end up in the scrapyard. They don't get sold primarily because they simply don't represent the optimal combination of features at a given price point for the buyer.
The industry has, however, been changing its approach, pushing the consumer much closer to the design process before the car is actually even built. Colors, trim, engine type, seating, communications and entertainment systems, types of brakes - all of these and more can be changed. Increasingly, these changes are even making their way to the configuration of the chassis and carriage. This becomes possible because it is far easier to change the design of the digital twin than it is to change the physical entity, and that physical entity can then be "instantiated" within a few days of ordering it.
What are the benefits? You end up producing product on demand, rather than in anticipation of it. This means that you need to invest in fewer materials, have smaller supply chains, produce less waste, and in general have a more committed customer. The downside, of course, is that you need fewer workers, have a much smaller sales infrastructure, and have to work harder to differentiate your product from your competitors'. This is also happening now - it is becoming easier than ever before for a company such as Amazon to sell bespoke vehicles, because of that digitalization process.
This is in fact one of the primary dangers facing established players. Even today, many C-Suite managers see themselves in the automotive manufacturing space, or the aircraft production space, or the book publishing space. Yet ultimately, once you move to a stage where you have digital twins creating a proxy for the physical object, the actual instantiation - the manufacturing aspect - becomes very much a secondary concern.
Indeed, the central tenet of digital transformation is that everything simply becomes a publishing exercise. If I have the software product to build a car, then ultimately the cost of building that car involves purchasing the raw materials and the time on a 3D printer, then performing the final assembly. There is a growing "hobbyist" segment of companies that can go from bespoke design to finished product in a few weeks. Ordinarily the volume of such production is low enough that it might be tempting to ignore what's going on, but between Covid-19 reshaping retail patterns, the diminishing spending power of Millennials and GenZers, and the changes increasingly required by Climate Change, the bespoke digital twin is likely to eat into increasingly thin margins.
Put another way, existing established companies in many different sectors have managed to maintain their dominance both because they were large enough to dictate the language that described the models and because the costs involved in manufacturing and production created a major barrier to entry for new players. That's now changing.
Consider the first part of this assertion. Names are important. One of the realizations that has emerged in the last twenty years is that before two people or organizations can communicate with one another, they need to establish (and refine) the meanings of the language used to identify entities, processes, and relationships. An API, when you get right down to it, is a language used to interact with a system. The problem with trying to deal with intercommunication is that it is generally far easier to establish internal languages - the way that one organization defines its terms - than it is to create a common language. For a dominant organization in a given sector, this often also manifests as the desire to dominate the linguistic debate, as this puts the onus of changing the language (a time-consuming and laborious process) into the hands of competitors.
However, this approach has also backfired spectacularly more often than not, especially when those competitors are willing to work with one another to weaken a dominant player. Most successful industry standards are pidgins - languages that capture 80-90% of the commonality in a given domain while providing a way to communicate about the remaining 10-20% that typifies the specialty of a given organization. This is the language of the digital twin, the way that you describe it, and the more that organizations subscribe to that language, the easier it is for those organizations to interchange digital twin components.
To put this into perspective, consider the growth of bespoke automobiles. One form of linguistic harmonization is the standardization of containment - the dimensions of a particular component, the location of ports for physical processes (pipes for fluids, air and wires) and electronic ones (the use of USB or similar communication ports), agreements on tolerances and so forth. With such ontologies in place, construction of a car's digital twin becomes far easier. Moreover, by adhering to these standards, linguistic as well as dimensional, you still get specialization at a functional level (for instance, the performance of a battery) while at the same time being able to facilitate containment variations, especially with digital printing technology.
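As a rough illustration of what such a containment standard buys you, the sketch below encodes a hypothetical standard battery bay and a vendor's pack in a shared vocabulary of dimensions and port types, then checks compatibility automatically. The field names, dimensions, and port labels are invented; a real ontology would be far richer.

```python
# A minimal sketch of "standardization of containment": components declare
# envelope dimensions and port types in a shared vocabulary, so a digital twin
# can verify that a specialized part (say, a battery pack) drops into a
# standard bay. The vocabulary and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Bay:
    length_mm: int
    width_mm: int
    height_mm: int
    ports: frozenset  # e.g. {"coolant-12mm", "hv-connector-A"}

@dataclass
class Component:
    length_mm: int
    width_mm: int
    height_mm: int
    required_ports: frozenset

def compatible(bay: Bay, part: Component) -> bool:
    fits = (part.length_mm <= bay.length_mm and
            part.width_mm <= bay.width_mm and
            part.height_mm <= bay.height_mm)
    return fits and part.required_ports <= bay.ports

standard_bay = Bay(1800, 1400, 150, frozenset({"coolant-12mm", "hv-connector-A"}))
vendor_pack = Component(1750, 1380, 140, frozenset({"hv-connector-A"}))
print(compatible(standard_bay, vendor_pack))  # True: the pack can be swapped in
```

Specialization still happens inside the envelope (battery chemistry, performance), while the shared containment language makes the parts interchangeable at the twin level.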
As an ontology emerges for automobile manufacturing, this facilitates "plug-and-play" at a macro-level. The barrier to entry for creating a vehicle drops dramatically, though likely not quite to the individual level (except for well-heeled enthusiasts). Ironically, this makes it possible for a designer to create a particular design that meets their criteria, and also makes it possible for that designer to sell or give that IP to others for license or reuse. Now, if history is any indication, that will likely initially lead to a lot of very badly designed cars, but over time, the bad designers will get winnowed out by long-tail market pressures.
Moreover, because it becomes possible to test digital twins in virtual environments, the market for digital wind-tunnels, simulators, stress analyzers and so forth will also rise. That is to say, just as programming has developed an agile methodology for testing, so too would manufacturing facilitate data agility that serves to validate designs. Lest this be seen as a pipe dream, consider that most contemporary game platforms can, with very little tweaking, be reconfigured for exactly this kind of simulation work, especially as GPUs increase in performance and available memory.
The same type of interoperability applies not just to the construction of components, but also to all aspects of resource metadata, especially with datasets. Ontologies provide ways to identify, locate and discover the schemas of datasets for everything from usage statistics to simulation parameters for training models. The design of that car (or airplane, or boat, or refrigerator) is simply one more digital file, transmissible in the same way that a movie or audio file is, and containing metadata that puts those resources into the broader context of the organization.
The long term impact on business is simple. Everything becomes a publishing company. Some companies will publish aircraft or automobiles. Others will publish enzymes or microbes, and still others will publish movies and video games. You still need subject matter expertise in the area that you are publishing into - a manufacturer of pastries will be ill-equipped to handle the publishing of engines, for instance, but overall you will see a convergence in the process, regardless of the end-product.
How long will this process take to play out? In some cases, it's playing out now. Book publishing is almost completely virtual at this stage, and the distinction between the physical object and the digital twin comes down to whether instantiation takes place or not. The automotive industry is moving in this direction, and drone tech (especially for military drones) has been shifting this way for years.
On the other hand, entrenched companies with extensive supply chains will likely adopt such digital twins approaches relatively slowly, and more than likely only at a point where competitors make serious inroads into their core businesses (or the industries themselves are going through a significant economic shock). Automobiles are going through this now, as the combination of the pandemic, the shift towards electric vehicles, and changing demographics are all creating a massive glut in automobile production that will likely result in the collapse of internal combustion engine vehicle sales altogether over the next decade along with a rethinking of the ownership relationship with respect to vehicles.
Similarly, the aerospace industry faces an existential crisis as demand for new aircraft has dropped significantly in the wake of the pandemic. While aircraft production is still a very high-cost business, the ability to create digital twins - along with an emergence of programming ontologies that make interchange between companies much more feasible - has opened up the market to smaller, more agile competitors who can create bespoke aircraft much more quickly by distributing the overall workload and specializing in configurable subcomponents, many of which are produced via 3D printing techniques.
Construction, likewise, is dealing with both the fallout due to the pandemic and the increasing abstractions that come from digital twins. The days when architects worked out details on paper blueprints are long gone, and digital twins of construction products are increasingly being designed with earthquake and weather testing, stress analysis, airflow and energy consumption and so forth. Combine this with the increasing capabilities inherent in 3D printing both full structures and custom components in concrete, carbon fiber and even (increasingly) metallic structures. There are still limitations; as with other large structure projects, the lack of specialized talent in this space is still an issue, and fabrication units are typically not yet built on a scale that makes them that useful for onsite construction.
Nonetheless, the benefits make achieving that scaling worthwhile. A 3D printed house can be designed, approved, tested, and "built" within three to four weeks, as opposed to six months to two years for traditional processes. Designs, similarly, can be bought or traded and modified, making it possible to create neighborhoods with significant variations between houses, as opposed to the two or three prefab designs that tend to predominate in the US especially. Such constructs can also move significantly away from the traditional boxy structures that most houses have, both internally and externally, as materials can be shaped to best fit the design aesthetic rather than the rectangular slabs that typify most building construction.
Such constructs can also be set up to be self-aware, to the extent that sensors can be built into the infrastructure and viewscreens (themselves increasingly moving away from flatland shapes) can replace or augment the views of the outside world. In this sense, the digital twin of the instantiated house or building is able to interact with its physical counterpart, maintaining history (memory) while increasingly able to adapt to new requirements.
This feedback loop - the ability of the physical twin to affect the model - provides a look at where this technology is going. Print publishing, once upon a time, had been something where the preparation of the medium, the book or magazine or newspaper, occurred only in one direction - from digital to print. Today, the print resides primarily on phones or screens or tablets, and authors often provide live blog chapters that evolve in agile ways. You're seeing the emergence of processors such as FPGAs that configure themselves programmatically, literally changing the nature of the processor itself in response to software code.
It's not that hard, with the right forethought, to envision real-world objects that can reconfigure themselves in the same way - buildings reconfiguring themselves for different uses or to adapt to environmental conditions, cars that can reconfigure their styling or even body shape, clothing that can change color or thermal profiles, aircraft that can be reconfigured for different uses within minutes, and so forth. This is reality in some places, though still piecemeal and in one-offs, but the malleability of digital twins - whether of office suites or jet engines - is the future of manufacturing.
The end state, likely still a few decades away, will be an economy built upon just-in-time replication and the importance of the virtual twin, where you are charged not for the finished product but the cost of the license to use a model, the material components, the "inks", for same, and the processing to go from the former to the latter (and back), quite possibly with some form of remuneration for recycled source. Moreover, as this process continues, more and more of the digital twin carries the burden of existence (tools that "learn" a new configuration are able to adapt to that configuration at any time). The physical and the virtual become one.
Some may see the resulting society as utopian, others as dystopian, but what is increasingly unavoidable is the fact that this is the logical conclusion of the trends currently at work (for some inkling of what such a society may be like, I'd recommend reading The Diamond Age by Neal Stephenson, which I believe to be very prescient in this regard).
The victim was found in the trunk ‘wrapped in a piece of fabric and in an advanced stage of decomposition.’
A Virginia man who was reported missing nearly two weeks ago was found dead and decomposing in his friend’s trunk in Florida after a car crash on a highway.
Brian Trotter, 25, known locally on the hip-hop scene as “Kent Don’t Stop,” had been missing since October 17th, PEOPLE reports. His best friend of over a decade, 25-year-old Robert Avery Coltrain, was arrested this week after Troopers with the Florida Highway Patrol discovered Trotter’s body in the trunk of Coltrain’s crashed silver Acura.
The accident occurred Sunday afternoon on the Palmetto Expressway near the Miami Lakes area, according to the report. When officers responded to the scene they noticed Coltrain removing a Glock gun case from the car as well as flies buzzing around the trunk and the smell of decomposition.
They popped the trunk and inside found Trotter’s body “wrapped in a piece of fabric and in an advanced stage of decomposition”.
According to the Associated Press, Coltrain was arrested and charged with second-degree murder and one count of illegal transport of human remains.
Police confirmed that Trotter died from multiple gunshot wounds but have not suggested a motive for the killing.
After his arrest, Coltrain was allowed his one call, and he allegedly phoned Trotter’s sister to apologize for the killing, which he said happened in Virginia.
Trotter’s family claimed the two friends were headed to Washington, D.C., presumably to take promotional photos for their music.
“No one can understand what happened,” Trotter’s father told the Miami Herald. “Hopefully, police can shed a light on what made a friend of over 10 years decide to commit something like that.”
In a post shared on a Facebook page, Trotter’s family confirmed his passing and thanked family and friends for their support.
“It is with heavy hearts that we tell you all that Brian was found deceased,” their statement reads, in part. “We are grateful for your love and support over the last 9 days. Your concern for Brian has lifted us up and is a testament to the light he shined on every one who knew him. In the coming days we ask for your thoughts and prayers and for privacy as we grieve, and as the police thoroughly investigate Brian’s death.”
A new remote work program is inviting high-earning travelers and their families to live and work in the Cayman Islands for up to two years. It's one of the few ways people can access the Cayman Islands, since the territory has not reopened to tourists.
‘I needed to say something before I could move on from this.’
Chrissy Teigen is opening up about losing her unborn child at 20 weeks.
Earlier this month, the former supermodel suffered a miscarriage of her third child with husband John Legend. They named their unborn son Jack.
As theGrio previously reported, Teigen posted a heartbreaking photo of herself in a hospital room, sitting on the edge of a bed with tears streaming down her face. In the caption, she announced that the couple’s son had died.
In an essay published this week on Medium, Teigen thanked friends, colleagues and fans for showering her with love and support during this devastating experience.
“Notes have flooded in and have each been read with our own teary eyes. Social media messages from strangers have consumed my days, most starting with, ‘you probably won’t read this, but…’ I can assure you, I did,” she writes.
She then recalls being in the hospital maternity ward, admitting “I had already come to terms with what would happen: I would have an epidural and be induced to deliver our 20 week old, a boy that would have never survived in my belly (please excuse these simple terms).”
Teigen and Legend are parents to two other children: Luna Simone, 4, and Miles Theodore, 2, both of whom were conceived via in vitro fertilization due to Teigen’s fertility challenges.
For her third pregnancy, she was diagnosed with “partial placenta abruption,” and received blood transfusions to save her child, but ultimately “my doctor told me exactly what I knew was coming — it was time to say goodbye,” writes Teigen. “He just wouldn’t survive this, and if it went on any longer, I might not either.”
At Teigen’s insistence, Legend reluctantly snapped photos of his wife in the hospital after losing their child, “no matter how uncomfortable it was,” she writes.
“I explained to a very hesitant John that I needed them, and that I did NOT want to have to ever ask. That he just had to do it. He hated it. I could tell. It didn’t make sense to him at the time. But I knew I needed to know of this moment forever, the same way I needed to remember us kissing at the end of the aisle, the same way I needed to remember our tears of joy after Luna and Miles. And I absolutely knew I needed to share this story,” she explains in the Medium essay.
Teigen then made time to clap back at critics of the post-miscarriage photos that she shared on social media.
“I cannot express how little I care that you hate the photos,” she writes to the haters. “How little I care that it’s something you wouldn’t have done. I lived it, I chose to do it, and more than anything, these photos aren’t for anyone but the people who have lived this or are curious enough to wonder what something like this is like. These photos are only for the people who need them. The thoughts of others do not matter to me.”
Teigen concluded by explaining that she decided to pen the essay “because I knew for me I needed to say something before I could move on from this and return back to life, so I truly thank you for allowing me to do so.”
"Fintech" describes the new technology integrated into various spheres to improve and automate all aspects of financial services provided to individuals and companies. Initially, this word was used for the tech behind the back-end systems of big banks and other organizations. And now it covers a wide specter of finance-related innovations in multiple industries, from education to crypto-currencies and investment management.
While traditional financial institutions offer a bundle of services, fintech focuses on streamlining individual offerings, making them an affordable, often one-click experience for users. This impact can be described with the word "disruption" - and now, to be competitive, banks and other conventional establishments have no choice but to change entrenched practices through cooperation with fintech startups. A vivid example is Visa's partnership with Ingo Money to accelerate the process of digital payments. Despite the slowdown related to the Covid-19 pandemic, the fintech industry will recover momentum and continue to change the face of the finance world.
Fintech users
Fintech users fall into four main categories. Such trends as mobile banking, big data, and unbundling of financial services will create an opportunity for all of them to interact in novel ways:
B2B - banks and their business clients
B2C - small enterprises and individual consumers
The main target group for consumer-oriented fintech is millennials - young, ready to embrace digital transformation, and accumulating wealth.
What needs do they have? According to a Credit Karma survey, 85% of millennials in the USA suffer from burnout syndrome and have no energy to think about managing their personal finances. Therefore, any apps that automate and streamline these processes have a good chance of becoming popular. They need an affordable personal financial assistant that can do the following 24/7:
Present an overview of their current financial situation
Provide coaching and improve financial literacy
What they expect to achieve:
Stop overspending (avoid late bills, do smart shopping with price comparison, cancel unnecessary subscriptions, etc.)
Develop saving habits, get better organized
Invest money (analyze deposit conditions in different banks, form an investment portfolio, etc.)
The fintech industry offers many solutions that can meet all these goals - not only on an individual but also on a national scale. However, in many countries there is still a high percentage of unbanked people - those without any form of bank account. According to the World Bank report, this number was 1.7 billion people in 2017. Mistrust of new technologies, poverty, and financial illiteracy are the obstacles that keep this group from tapping into the huge potential of fintech. Therefore, businesses and governments must direct inclusion efforts towards this audience, as all stakeholders will benefit from it. Clearly, affordable and easy-to-get fintech services customized for this huge group of first-time users will be a big trend in the future.
Big Data, AI, ML in Fintech
According to an Accenture report, AI integration will boost corporate profits in many industries, including fintech, by almost 40% by 2035, which equals a staggering $14 trillion. Without a doubt, Big Data technologies, such as streaming analytics, in-memory computing, artificial intelligence, and machine learning, will be the powerhouse behind numerous business objectives that banks, credit unions, and other institutions strive to achieve:
Aggregate and interpret massive amounts of structured and unstructured data in real-time.
With the help of predictive analytics, make accurate future forecasts, identify potential problems (e.g., credit scoring, investment risks)
Build optimal strategies based on analytical reports
Facilitate decision-making
Segment clients for more personalized offers and thus increase retention.
Detect suspicious behavior, prevent identity fraud and other types of cybercrime, make transactions more secure with such technologies as face and voice recognition.
Find and extend new borrower pools among the no-file/thin-file segment, widely represented by Gen Z (the successors of millennials), who lack or have a short credit history.
Automate low-value tasks (e.g., such back-office operations as internal technical requests)
Cut operational expenses by streamlining processes (e.g., image recognition algorithms for scanning, parsing documents, and taking further actions based on regulations) and reducing man-hours.
Considerably improve client experience with conversational user interfaces, available 24/7, and capable of resolving any issues instantly. Conversational banking is used by many big banks worldwide; some companies integrate financial chatbots for processing payments in social media.
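As a minimal, hedged sketch of the "detect suspicious behavior" objective above: an unsupervised model trained on historical activity flags transactions that look unlike the bulk of it. The data below is synthetic and the two features (amount and hour of day) are purely illustrative.

```python
# A minimal sketch of unsupervised fraud screening: an IsolationForest is
# trained on "normal" historical transactions, then asked to flag outliers.
# All data here is synthetic; real systems use far richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(40, 15, 1000),   # typical amounts ($)
                          rng.normal(14, 3, 1000)])   # typical hours of day
suspicious = np.array([[4800, 3], [2500, 4]])         # large late-night charges
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(X)   # -1 = anomaly, 1 = normal (about 1% of normal rows
                           # may also be flagged, per the contamination setting)
print(X[flags == -1])      # the transactions flagged for review
```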
Neobanks
Digital or internet-only banks do not have brick-and-mortar branches and operate exclusively online. The word neobank became widely used in 2017 and referred to two types of app-based institutions - those that provided financial services with their own banking license and those partnering with traditional banks. Time wasted in lines and on paperwork is the reason why bank visits are predicted to fall to just four visits a year by 2022. Neobanks, e.g., Revolut, Digibank, FirstDirect, offer a wide range of services - global payments and P2P transfers, virtual cards for contactless transactions, operations with cryptocurrencies, etc. - and the fees are lower than with traditional banks. Clients get support through in-app chat. Among the challenges associated with digital banking are higher susceptibility to fraud and lower trustworthiness due to the lack of a physical address. In the US, the development of neobanks has faced regulatory obstacles. However, the situation is changing for the better.
Smart contracts
A smart contract is software that allows automatic execution and control of agreements between buyers and sellers. How does it work? If two parties want to agree on a transaction, they no longer need a paper document and a lawyer. They sign the agreement digitally with cryptographic keys. The document itself is encoded in a tamper-proof manner. The role of witnesses is performed by a decentralized blockchain network of computing devices that receive copies of the contract, and the code guarantees the fulfillment of its provisions, with all transactions transparent, trackable, and irreversible. This sky-high level of reliability and security makes any fintech operation possible in any spot of the world, at any time. The parties to the contract can be anonymous, and there is no need for other authorities to regulate or enforce its implementation.
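The sketch below is a toy Python simulation of that idea, not a real blockchain contract language: the terms are code, execution happens automatically once both parties have signed, and each state change is hash-chained so later tampering is detectable. The party names and the escrow logic are invented for illustration.

```python
# A toy simulation of the smart-contract idea: the agreement's terms are code,
# execution is automatic once both parties have signed, and every state change
# is hash-chained so tampering with the record is detectable.
import hashlib
import json

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.state = {"buyer": buyer, "seller": seller, "amount": amount,
                      "signatures": [], "released": False}
        self.chain = []
        self.chain.append(self._digest())

    def _digest(self):
        prev = self.chain[-1] if self.chain else ""
        payload = prev + json.dumps(self.state, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def sign(self, party):
        if party in (self.state["buyer"], self.state["seller"]):
            self.state["signatures"].append(party)
            self.chain.append(self._digest())
        # funds "release" automatically once both parties have signed
        if {self.state["buyer"], self.state["seller"]} <= set(self.state["signatures"]):
            self.state["released"] = True
            self.chain.append(self._digest())

contract = EscrowContract("alice", "bob", 1_000)
contract.sign("alice")
contract.sign("bob")
print(contract.state["released"])  # True, with no intermediary involved
```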
Open banking
Open banking is a system that allows third parties to access bank and non-bank financial institutions' data through APIs (application programming interfaces) to create a network. Third-party service providers, such as tech startups, upon user consent, aggregate these data through apps and apply them to identify, for instance, the best financial products, such as a savings account with the highest interest rate. Networked accounts will allow banks to accurately calculate mortgage risks and offer the best terms to low-risk clients. Open banking will also help small companies save time with online accounting and will play an important role in fraud detection. Services like Mint require users to provide credentials for each account, although such practice has security risks, and data processing is not always accurate. APIs are a better option as they allow direct data sharing without the login and password being handed over. Consumer security is still compromised, and this is one of the main reasons why the open banking trend hasn't taken off yet. Many banks worldwide cannot provide open APIs of sufficient quality to meet existing regulatory standards. There are still a lot of blind spots, including those related to technology. However, open banking is a promising trend. The Accenture report offers many interesting insights.
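A minimal sketch of that pattern, assuming a purely hypothetical bank API: the third-party app holds a user-consented access token and reads account data over HTTPS instead of asking for the user's credentials. The endpoint, token, and response fields below do not correspond to any real bank's API.

```python
# A minimal sketch of the open-banking pattern: a third-party app, holding a
# user-consented access token, reads account data through an API instead of
# collecting the user's login and password. Endpoint, token, and response
# fields are hypothetical, not any real bank's API.
import requests

ACCESS_TOKEN = "user-consented-oauth-token"   # obtained via the bank's consent flow
API_BASE = "https://api.example-bank.com/open-banking/v1"

resp = requests.get(f"{API_BASE}/accounts",
                    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                    timeout=10)
resp.raise_for_status()

# e.g. surface the savings account with the best interest rate
accounts = resp.json().get("accounts", [])
best = max(accounts, key=lambda a: a.get("interest_rate", 0), default=None)
print(best)
```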
Blockchain and cryptocurrencies
The distributed ledger technology Blockchain, which is the basis of many cryptocurrencies, will continue to transform the face of global finance, with the US and China being global adoption leaders. The most valuable feature of a blockchain database is that data cannot be altered or deleted once it has been written. This high level of security makes it perfect for big data apps across various sectors, including healthcare, insurance, energy, and banking, especially those dealing with confidential information. Although the technology is still in the early stages of its development and will eventually become more suited to the needs of fintech, there are already Blockchain-based innovative solutions both from giants like Microsoft and IBM and from numerous startups. The philosophy of decentralized finance has already given rise to a variety of peer-to-peer financing platforms and will be the source of new cryptocurrencies, perhaps even national ones. Blockchain considerably accelerates transactions between banks through secure servers, and banks use it to build smart contracts. The technology is also growing in popularity with consumers. Since 2009, when Bitcoin was created, the number of Blockchain wallet users has reached 52 million. A wallet relies on a layer of security known as "tokenization": payment information is sent to vendors as tokens that associate the transaction with the right account.
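The sketch below illustrates the tokenization idea in a few lines of Python: the merchant only ever sees an opaque token, and only the wallet provider can map it back to the underlying account. The storage model and token format are simplified for illustration.

```python
# A minimal sketch of payment tokenization: the vendor never sees the card
# number, only a random token that the wallet provider can map back to the
# right account. Storage and token format are simplified for illustration.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)          # opaque value sent to the merchant
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:  # only the wallet provider can do this
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print("merchant sees:", token)
print("provider resolves to:", vault.detokenize(token))
```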
Regtech
Regtech, or regulation technology, is represented by a group of companies, e.g., IdentityMind Global, Suade, Passfort, and Fund Recs, providing AI-based SaaS solutions to help businesses comply with regulatory processes. These companies process complex financial data and combine them with information on previous regulatory failures to detect potential risks and design powerful analytical tools. Finance is a conservative industry, heavily regulated by the government. As the number of technology companies providing financial services increases, problems associated with compliance with such regulations also multiply. For instance, process automation makes fintech systems vulnerable to hacker attacks, which can cause serious damage. Responsibility for such security breaches and misuse of sensitive data, prevention of money laundering, and fraud are the main issues that concern state institutions, service providers, and consumers. There will be over 2.6 billion biometric users of payment systems by 2023, so the regtech application area is huge.
In the EU, PSD2 and SCA aim to regulate payments and their providers. Although these legal acts create some regulatory obstacles for fintech innovations, the European Commission also proposes a series of alleviating changes, for instance, taking paper documents for consumers off the table. In the US, fintech companies must comply with outdated financial legislation. The silver lining is the new FedNow service for instantaneous payments, which is likely to be launched in 2023–2024 and provides a ready public infrastructure.
Insurtech
The insurance industry, like many others, needs streamlining to be more efficient and cost-effective and to meet the demands of the times. Insurtech companies are exploring new possibilities through a new generation of smart apps, such as ultra-customization of policies, behavior-based dynamic premium pricing based on data from Internet-enabled devices such as GPS navigators and fitness activity trackers, AI brokerages, on-demand insurance for micro-events, and more. As we mentioned before, the insurance business is also subject to strict government regulations, and it requires close cooperation between traditional insurers and startups to make a breakthrough that will benefit everyone.
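As a rough sketch of behavior-based dynamic pricing, the function below adjusts a base premium using signals a connected device might report, such as mileage, harsh braking, and night driving. The weights and thresholds are invented for illustration and are not actuarial values.

```python
# A minimal sketch of behavior-based dynamic premium pricing: a base premium is
# adjusted by signals from connected devices (here, driving data from a GPS
# tracker). The weights and thresholds are invented, not actuarial values.
def monthly_premium(base, miles_driven, hard_brakes_per_100mi, night_share):
    factor = 1.0
    factor += 0.10 if miles_driven > 1200 else -0.05   # exposure (mileage)
    factor += 0.02 * max(hard_brakes_per_100mi - 2, 0) # harsh braking
    factor += 0.15 * night_share                       # share of night driving
    return round(base * factor, 2)

print(monthly_premium(base=80.0, miles_driven=650,
                      hard_brakes_per_100mi=1.2, night_share=0.05))  # safe driver
print(monthly_premium(base=80.0, miles_driven=1500,
                      hard_brakes_per_100mi=6.0, night_share=0.4))   # riskier profile
```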
Industry experts from TikTok, Microsoft, and more discuss the latest trends in cybersecurity and public policy.
Enterprise Ireland, Ireland’s trade and innovation agency, hosted a virtual Cyber Security & Public Policy panel discussion with several industry-leading experts. The roundtable discussion allowed cybersecurity executives from leading organizations to come together and discuss The Nexus of Cyber Security and Public Policy.
The panel included Roland Cloutier, the Global Chief Security Officer of TikTok; Ann Johnson, the CVP of Business Development - Security, Compliance & Identity at Microsoft; Richard Browne, the Director of Ireland’s National Cyber Security Centre; and Melissa Hathaway, the President of Hathaway Global Strategies LLC, who formerly spearheaded the Cyberspace Policy Review for President Barack Obama and led the Comprehensive National Cyber Security Initiative (CNCI) for President George W. Bush.
Panelists discussed the European Cloud and the misconception companies have of complete safety and security when migrating to the Cloud and whether it is a good move for a company versus a big mistake. Each panelist also brought valuable perspective and experience to the table on other discussion topics including cyber security’s recent rapid growth and changes; the difference between U.S. and EU policies and regulations; who holds the responsibility for protecting consumer data and privacy; and more.
“As more nations and states continue to improve upon cybersecurity regulations, the conversation between those developing policy and those implementing it within the industry becomes more important,” said Aoife O’Leary, Vice President of Digital Technologies, Enterprise Ireland. “We were thrilled to bring together this panel from both sides of the conversation and continue to highlight the importance of these discussions for both Enterprise Ireland portfolio companies and North American executives and thought leaders.”
This panel discussion was the second of three events in Enterprise Ireland’s Cyber Demo Day 2020 series, inclusive of over 60 leading Irish cyber companies, public policy leaders, and cyber executives from many of the largest organizations in North America and Ireland.
To view a recording of the Cyber Security & Public Policy Panel Discussion from September 23rd, please click here.
###
About Enterprise Ireland
Enterprise Ireland is the Irish State agency that works with Irish enterprises to help them start, grow, innovate, and win export sales in global markets. Enterprise Ireland partners with entrepreneurs, Irish businesses, and the research and investment communities to develop Ireland's international trade, innovation, leadership, and competitiveness. For more information on Enterprise Ireland, please visit https://enterprise-ireland.com/en/.
from Featured Blog Posts - Data Science Central https://ift.tt/37KUTOD
via Gabe's Musings
Obtaining good quality data can be a tough task. An organization may face quality issues when integrating data sets from various applications or departments or when entering data manually.
Here are some of the things a company can do to improve the quality of the information it collects:
1. Data Governance plan
A good data governance plan should not only address ownership, classification, sharing, and sensitivity levels but also spell out the procedural details that outline your data quality goals. It should also identify all the personnel involved in the process and their respective roles and, more importantly, define a process for resolving and working through issues.
2. Data Quality Guidance
You should also have a clear guide to use when separating good data from bad data. You will have to calibrate your automated data quality system with this information, so you need to have it laid out beforehand.
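To make this concrete, here is a minimal sketch of what codified quality rules might look like in Python with pandas. The table and column names (email, signup_date, age) and the specific thresholds are illustrative assumptions, not prescriptions:

```python
# A minimal sketch of codified data quality rules, assuming a hypothetical
# customer table with "email", "signup_date", and "age" columns.
import pandas as pd

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows that violate the agreed-upon quality rules."""
    checks = pd.DataFrame(index=df.index)
    checks["email_present"] = df["email"].notna() & df["email"].str.contains("@", na=False)
    checks["age_in_range"] = df["age"].between(0, 120)
    checks["date_parseable"] = pd.to_datetime(df["signup_date"], errors="coerce").notna()
    checks["passes_all"] = checks.all(axis=1)
    return checks

# Usage: rows failing any rule are routed to the cleansing process.
# bad_rows = df[~validate(df)["passes_all"]]
```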
3. Data Cleansing Process
Data correction is the whole point of looking for flaws in your datasets. Organizations need to provide guidance on what to do with specific forms of bad data and on identifying what is critical and common across all organizational data silos. Implementing data cleansing manually is cumbersome: as the business shifts, changing strategies dictate changes in the data and in the underlying process.
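A simple cleansing step, continuing the hypothetical customer table above, might correct what it can and quarantine what it cannot. This is only a sketch of the pattern, not a prescribed pipeline:

```python
# A minimal cleansing sketch: fixable issues are corrected in place,
# unfixable rows are quarantined for manual review.
import pandas as pd

def cleanse(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    df = df.copy()
    df["email"] = df["email"].str.strip().str.lower()         # normalise casing/whitespace
    df = df.drop_duplicates(subset=["email"], keep="first")    # remove duplicate records
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    unfixable = df[df["signup_date"].isna() | df["email"].isna()]
    clean = df.drop(unfixable.index)
    return clean, unfixable
```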
4. Clear Data Lineage
With data flowing in from different departments and digital systems, you need a clear understanding of data lineage – how an attribute is transformed as it moves through system-to-system interactions – which in turn builds trust and confidence in the data.
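In its simplest form, lineage is just a record attached to every transformation of where the data came from and what was done to it. The field names below are illustrative assumptions:

```python
# A toy sketch of lineage tracking: each transformation appends a record
# describing the source, the target, and the change applied.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageLog:
    steps: list[dict] = field(default_factory=list)

    def record(self, source: str, target: str, transformation: str) -> None:
        self.steps.append({
            "source": source,
            "target": target,
            "transformation": transformation,
            "at": datetime.now(timezone.utc).isoformat(),
        })

log = LineageLog()
log.record("crm.customers", "warehouse.dim_customer", "lower-cased emails, deduplicated")
```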
5. Data Catalog and Documentation
Improving data quality is a long-term process that you can streamline using both anticipation and past findings. By documenting every detected problem and its associated data quality score in the data catalog, you reduce the risk of repeating mistakes and solidify your data quality enhancement regime over time.
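One lightweight way to do this is to persist each run's score and findings against the dataset's catalog entry, so future runs can be compared against the history. The JSON layout here is an assumption for illustration, not a catalog standard:

```python
# A sketch of appending quality findings to a per-dataset catalog entry.
import json
from datetime import date

def log_quality(catalog_path: str, dataset: str, score: float, issues: list[str]) -> None:
    try:
        with open(catalog_path) as f:
            catalog = json.load(f)
    except FileNotFoundError:
        catalog = {}
    catalog.setdefault(dataset, []).append(
        {"date": date.today().isoformat(), "score": score, "issues": issues}
    )
    with open(catalog_path, "w") as f:
        json.dump(catalog, f, indent=2)

log_quality("catalog.json", "warehouse.dim_customer", 0.97, ["3 rows missing signup_date"])
```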
As stated above, there is just too much data out there to incorporate into your business intelligence strategy. The data volumes are building up even more with the introduction of new digital systems and the increasing spread of the internet. For any organization that wants to keep up with the times, that translates into a need for more personnel, from data curators and data stewards to data scientists and data engineers. Luckily, today’s technology and AI/ML innovation allow even the least tech-savvy individuals to contribute to data management with ease. Organizations should leverage analytics-augmented data quality and data management platforms like DQLabs.ai to realize immediate ROI without long implementation cycles.
from Featured Blog Posts - Data Science Central https://ift.tt/31KFqdG
via Gabe's Musings
For the last few years, I have read the free State of AI Report.
Here is the list of insights I found interesting.
The full report and the download link are at the end of this article.
AI research is less open than you think: Only 15% of papers publish their code
Facebook’s PyTorch is fast outpacing Google’s TensorFlow in research papers, which tends to be a leading indicator of production use down the line
PyTorch is also more popular than TensorFlow in paper implementations on GitHub
Language models: Welcome to the Billion Parameter club
Huge models, large companies and massive training costs dominate the hottest area of AI today, NLP.
Bigger models, datasets and compute budgets clearly drive performance
Empirical scaling laws of neural language models show smooth power-law relationships, which means that as model performance increases, the model size and amount of computation have to increase more rapidly.
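For reference, the commonly cited form of these scaling laws comes from Kaplan et al. (2020); the relation and constants below are quoted from that paper for illustration and are not reproduced from the report itself:

```latex
% Power-law scaling of language-model test loss with model size (Kaplan et al., 2020):
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\; N_c \approx 8.8 \times 10^{13}
```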
Tuning billions of model parameters costs millions of dollars
Based on variables released by Google et al., you’re paying circa $1 per 1,000 parameters. This means OpenAI’s 175B parameter GPT-3 could have cost tens of millions to train. Experts suggest the likely budget was $10M.
We’re rapidly approaching outrageous computational, economic, and environmental costs to gain incrementally smaller improvements in model performance
Without major new research breakthroughs, dropping the ImageNet error rate from 11.5% to 1% would require over one hundred billion billion dollars! Many practitioners feel that progress in mature areas of ML is stagnant.
A larger model needs less data than a smaller peer to achieve the same performance
This has implications for problems where training data samples are expensive to generate, which likely confers an advantage to large companies entering new domains with supervised learning-based models.
Even as deep learning consumes more data, it continues to get more efficient
Since 2012 the amount of compute needed to train a neural network to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months.
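As a back-of-envelope illustration of what that halving rate implies (my arithmetic, not a figure from the report):

```latex
% Compute reduction implied by a 16-month halving time for training compute:
\text{reduction factor} \approx 2^{\Delta t / 16\ \text{months}}
\;\Rightarrow\; 2^{96/16} = 2^{6} = 64\times \text{ over the 8 years from 2012 to 2020}
```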
A new generation of transformer language models is unlocking new NLP use-cases
GPT-3, T5, BART are driving a drastic improvement in the performance of transformer models for text-to-text tasks like translation, summarization, text generation, text to code.
NLP benchmarks take a beating: Over a dozen teams outrank the human GLUE baseline
It was only 12 months ago that the human GLUE benchmark was beaten by 1 point. Now SuperGLUE is in sight.
What’s next after SuperGLUE? More challenging NLP benchmarks zero-in on knowledge
A multi-task language understanding challenge tests for world knowledge and problem solving ability across 57 tasks including maths, US history, law and more. GPT-3’s performance is lopsided with large knowledge gaps.
The transformer’s ability to generalise is remarkable. It can be thought of as a new layer type that is more powerful than convolutions because it can process sets of inputs and fuse information more globally.
For example, GPT-2 was trained on text but can be fed images in the form of a sequence of pixels to learn how to autocomplete images in an unsupervised manner.
Biology is experiencing its “AI moment”: Over 21,000 papers in 2020 alone
Publications involving AI methods (e.g. deep learning, NLP, computer vision, RL) in biology are growing >50% year-on-year since 2017. Papers published since 2019 account for 25% of all output since 2000.
From physical object recognition to “cell painting”: Decoding biology through images
Large labelled datasets offer huge potential for generating new biological knowledge about health and disease.
Deep learning on cellular microscopy accelerates biological discovery with drug screens
Embeddings from experimental data illuminate biological relationships and predict COVID-19 drug successes.
Ophthalmology advances as the sandbox for deep learning applied to medical imaging
After diagnosis of ‘wet’ age-related macular degeneration (exAMD) in one eye, a computer vision system can predict whether a patient’s second eye will convert from healthy to exAMD within six months. The system uses 3D eye scans and predicted semantic segmentation maps.
AI-based screening mammography reduces false positives and false negatives in two large, clinically-representative datasets from the US and UK
The AI system, an ensemble of three deep learning models operating on individual lesions, individual breasts and the full case, was trained to produce a cancer risk score between 0 and 1 for the entire mammography case. The system outperformed human radiologists and could generalise to US data when trained on UK data only.
Causal reasoning is a vital missing ingredient for applying AI to medical diagnosis
Existing AI approaches to diagnosis are purely associative, identifying diseases that are strongly correlated with a patient’s symptoms. The inability to disentangle correlation from causation can result in suboptimal or dangerous diagnoses.
Model explainability is an important area of AI safety: A new approach aims to incorporate causal structure between input features into model explanations
A flaw with Shapley values, one current approach to explainability, is that they assume the model’s input features are uncorrelated. Asymmetric Shapley Values (ASV) are proposed to incorporate this causal information.
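As a point of reference, a minimal sketch of the standard (symmetric) Shapley-value workflow using the open-source shap package is shown below; ASV is a research proposal that extends this baseline with causal orderings between features, and is not assumed to be available in the library. The dataset and model choices are illustrative:

```python
# Standard (symmetric) Shapley-value explanations with the shap package;
# this is the baseline that Asymmetric Shapley Values aim to extend.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)    # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)   # one attribution per feature per row
print(shap_values[0])                    # contributions for the first prediction
```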
Reinforcement learning helps ensure that molecules you discover in silico can actually be synthesized in the lab. This helps chemists avoid dead ends during drug discovery.
RL agent designs molecules using step-wise transitions defined by chemical reaction templates.
American institutions and corporations continue to dominate NeurIPS 2019 papers
Google, Stanford, CMU, MIT and Microsoft Research own the Top-5.
The same is true at ICML 2020: American organisations cement their leadership position
The top 20 most prolific organisations by ICML 2020 paper acceptances further cemented their position vs. ICML 2019. The chart below shows their Publication Index position gains vs. ICML 2019.
Demand outstrips supply for AI talent
Analysis of Indeed.com US data shows almost 3x more job postings than job views for AI-related roles. Job postings grew 12x faster than job viewings from late 2016 to late 2018.
US states continue to legislate autonomous vehicles policies
Over half of all US states have enacted legislation related to autonomous vehicles.
Even so, driverless cars are still not so driverless: Only 3 of 66 companies with AV testing permits in California are allowed to test without safety drivers since 2018
The rise of MLOps (DevOps for ML) signals an industry shift from technology R&D (how to build models) to operations (how to run models)
25% of the top-20 fastest growing GitHub projects in Q2 2020 concern ML infrastructure, tooling and operations. Google Search traffic for “MLOps” is now on an uptick for the first time.
As AI adoption grows, regulators give developers more to think about
External monitoring is transitioning from a focus on business metrics down to low-level model metrics. This creates challenges for AI application vendors, including slower deployments, IP sharing, and more.
Berkshire Grey robotic installations are achieving millions of robotic picks per month
Supply chain operators realise a 70% reduction in direct labour as a result.
Multiple countries and states start to wrestle with how to regulate the use of ML in decision making.
GPT-3, like GPT-2, still outputs biased predictions when prompted with topics of religion
The report shows example prompts and predictions from GPT-3 and GPT-2 that contain clear bias. Models trained on large volumes of language from the internet will reflect the bias in those datasets unless their developers make efforts to fix this. See our coverage in the State of AI Report 2019 of how Google adapted their translation model to remove gender bias.