BRN 0.00% 31.0¢ brainchip holdings ltd

    This tweet got me thinking about FinTech again. Nothing concrete here - just tenuous 'dot joining':



    Buried in the middle there is BBVA, a name that's come up a few times in my researching. Based in Spain. Mr Coello is based in Spain - coincidence? Probably.

    Read this from BBVA Labs, the AI research arm of BBVA. They're basically complaining about the limitations of Google's TensorFlow. BRN to the rescue?

    The article is huge and too much for my old brain but the tech-heads might enjoy reading the whole thing:
    https://www.bbva.com/en/conclusions-distributed-neural-networks-tensorflow/

    Conclusions about distributed neural networks with Tensorflow


    Neural network training is a time-consuming activity; the amount of computation needed is high even by today's standards. There are two ways to reduce the time required: use more powerful machines, or use more machines.

    The first approach can be achieved with dedicated hardware like GPUs, or perhaps FPGAs or TPUs in the future. But the work can also be split across more general-purpose hardware, like that used in cloud systems.
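    The "use more machines" approach usually means data parallelism: each worker computes gradients on its own shard of the data, and the averaged gradient updates the shared model. Here's a toy sketch of that idea in plain Python (no real cluster or TensorFlow; the linear model and learning rate are illustrative assumptions):

```python
# Toy sketch of data-parallel training: each "worker" computes the
# gradient of a squared-error loss on its own shard of the data, and
# the shards' gradients are averaged before updating the parameter --
# the core idea behind distributed SGD. Workers are simulated in-process.

def shard(data, n_workers):
    """Split data into n_workers roughly equal shards."""
    k, m = divmod(len(data), n_workers)
    return [data[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n_workers)]

def grad_on_shard(w, points):
    """Gradient of mean squared error of the model y = w*x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in points) / len(points)

def train(data, n_workers=4, lr=0.05, steps=100):
    w = 0.0
    shards = shard(data, n_workers)
    for _ in range(steps):
        # Each worker computes its local gradient (in parallel on a real
        # cluster); a parameter server averages them and updates w.
        grads = [grad_on_shard(w, s) for s in shards]
        w -= lr * sum(grads) / len(grads)
    return w

# Data generated from y = 3x; training should recover w close to 3.
data = [(x / 10, 3 * x / 10) for x in range(1, 41)]
print(round(train(data), 2))  # -> 3.0
```

    With equal-sized shards, the average of the shard gradients equals the full-batch gradient, so distributing the work doesn't change what is learned, only how fast it can be computed.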

    This document summarizes the conclusions reached after researching the use of distributed neural networks.

    The most common approach to neural network training in the academic community is based on dedicated machines packed with an increasing number of ever more powerful GPUs. Acquiring a small number of these machines for research purposes is probably not especially challenging, but purchasing a large number of them in a multinational financial institution can take a while. So a way to gather a similar amount of processing power from already-purchased general-purpose hardware is appealing if the goal is to be agile and respond quickly to market needs.

    Another limitation of having a small number of powerful machines is that it may not be possible to run many processes in parallel. It can be useful to run more than one at the same time, even if each is a bit slower.

    The size of the data collected by modern systems makes moving it to the dedicated hardware problematic. In that sense it may be a good idea to move the processing power closer to the data instead. This is the approach used in Spark and similar systems, but right now there are no good ML libraries on top of Spark.

    Another point against dedicated hardware is simple flexibility: the same general-purpose hardware can be used for other purposes instead of sitting idle when no training is needed.

    To achieve maximum flexibility and adhere to distributed-computing principles, the software should be deployed in containers (Docker, rkt).

    These considerations led to the idea of investing time in finding out whether distributed neural networks were viable. In this post we focus on one machine learning framework capable of distributed training: Google's TensorFlow.

    The goal of the project was to develop an architecture that is as flexible as possible: it must be able to run different kinds of models to solve different kinds of problems, so the system must be as data-agnostic as possible.

    The system should not store data locally if possible. The rationale for this is again flexibility. It runs against the current trend of moving the code to the data, but there are cases where it can be a good idea, for example when the data arrives as a stream. Banking operations and business events come in streams, so this approach can be useful for building systems that learn continuously from the live stream of events.

    A machine learning system capable of learning in real time from a live feed of events, and of scaling to match the changing event load, is an appealing and challenging goal worth pursuing.
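    To make the "learning from a live stream" idea concrete, here's a minimal sketch of a model that updates itself one event at a time: an exponentially weighted moving average of transaction amounts that flags outliers. The transaction stream, smoothing factor, and threshold are all invented for illustration:

```python
# Minimal sketch of learning from a live event stream: an exponentially
# weighted moving average (EWMA) of transaction amounts, updated one
# event at a time, flags amounts far above the running estimate.
# No data is stored; the model state is just the running mean.

class StreamingAnomalyDetector:
    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # how quickly the model forgets the past
        self.threshold = threshold  # flag events > threshold * running mean
        self.mean = None

    def observe(self, amount):
        """Update the model with one event; return True if it looks anomalous."""
        if self.mean is None:
            self.mean = amount
            return False
        anomalous = amount > self.threshold * self.mean
        # Learn from every event, so the model tracks drift in the stream.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * amount
        return anomalous

detector = StreamingAnomalyDetector()
stream = [20, 25, 22, 24, 21, 500, 23]   # one suspicious payment
flags = [detector.observe(a) for a in stream]
print(flags)  # -> [False, False, False, False, False, True, False]
```

    A real fraud model would use far richer features, but the shape is the same: constant memory, one update per event, no batch retraining.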



    Then there's this article, which mentions 'patterns' six times. That's what AKIDA does, so at a minimum it's a good example of where AKIDA might be used. Be sure to read the paragraph under the "Big data based recommendations" heading.

    'Machine learning': intelligence that learns by itself

    Below a mountain of ‘big data’ lie simple laws that allow you to define patterns. ‘Machine learning’ uses them to improve the lives of human beings.

    Machine learning – automated learning – allows machines to learn without being expressly programmed. This learning ability is essential for developing smart systems capable of identifying patterns and turning data into forecasts.

    “When you are programming, you tell the data what the next step is. With machine learning that control is reversed: it’s the data that tell you what the next step will be,” explains Keepcoding co-founder Fernando Rodríguez, speaking at ‘Big Data & Machine Learning. Millions of datum, Endless Possibilities’, an event held at BBVA’s Innovation Center.

    The term ‘machine learning’ refers to the ability of systems to generate their own algorithms based on the data and the results we want to find out about. “Underneath a mountain of data lies a simple law that explains behaviors and allows a pattern to be defined,” explained the expert.
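    The "simple law underneath the data" idea is easy to see with the most basic learner there is: ordinary least squares recovering a linear pattern from raw points. The data here are made up for illustration:

```python
# Sketch of finding "a simple law underneath the data": ordinary least
# squares recovers the slope and intercept of a linear pattern from
# raw (x, y) points, without that law being programmed in explicitly.

def fit_line(points):
    """Least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Points generated from the hidden "law" y = 2x + 1.
points = [(x, 2 * x + 1) for x in range(10)]
a, b = fit_line(points)
print(round(a, 2), round(b, 2))  # -> 2.0 1.0
```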

    The rise of the machines: Three reasons humans are irreplaceable


    Cristóbal Sepúlveda, Technical Architect at BBVA, said at the event that this branch of artificial intelligence (AI), machine learning, “allows the bank to solve problems without having to re-program its systems.”  These systems created with the help of machine learning not only provide answers, but are also learning to ask smart questions to turn data into forecasts and formulate their own hypotheses.

    According to the BBVA expert, AI also allows the bank to get to know its customers much better, and thus deliver increasingly “better services and experiences.” “It also helps us as employees, because by having much more powerful tools, we are now able to create much more innovative solutions,” he said.

    The years of trial and error

    One of the defining aspects of machine learning is that it requires tremendous amounts of both data and processing power. And that is something that was just not available in the 1980s, as Keepcoding expert Rodríguez recalls: “In addition to shoulder pads and the A-Team, artificial intelligence caused quite a stir in that decade, but the hype faded away when nobody was able to deliver on the unrealistic promises that were thrown around.”

    Indeed, many experts refer to the ’80s as the “AI Winter”. During those years, said Rodríguez, three factors led to the demise of machine learning. First, the approach to the problem was inappropriate: people tried to use highly advanced algorithms to solve very simple problems. Second, the overall lack of processing power meant that computers were simply not able to implement AI algorithms. And third, data availability and storage capacity were lacking.

    But all this changed with the advent of the internet: “Data generation, in terms of volume, speed and variety, exploded.” This led to an “avalanche of data” that, as noted by Rodríguez, is being used, in the case of banks, to assess risks or detect fraudulent use of credit cards, for example. “Now you can detect patterns and predict future behaviors, something that requires having tons of data,” he said.

    Big data based recommendations

    Sepúlveda also presented an actual use case of this technology: “At BBVA, we developed a service recommendation engine for bank users. What we are trying to do is make the best commercial offer based on the user’s most-used transactions and their navigation patterns.” All this information is processed by a classification algorithm, which then generates a recommendation. “The volume of information is incredibly vast and the only way to offer a recommendation is using machine learning technologies,” he noted.
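    As a rough sketch of recommendation-as-classification like the engine described above: classify a user by their transaction-usage features and recommend the offer attached to the closest known profile. The profiles, feature vector, and offers below are entirely invented for illustration; BBVA's actual engine is not public:

```python
# Hedged sketch of a recommendation engine built on classification:
# represent each user by transaction-usage features and recommend the
# offer whose "typical user" profile is nearest (nearest-centroid
# classification). All profiles and offers here are made up.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Feature vector: (card payments, transfers, savings deposits) per month.
PROFILES = {
    "travel card":     (40, 2, 1),
    "mortgage offer":  (10, 8, 3),
    "savings account": (5, 3, 20),
}

def recommend(user_features):
    """Return the offer whose profile centroid is nearest the user."""
    return min(PROFILES, key=lambda offer: distance(PROFILES[offer], user_features))

# A heavy card user with few transfers looks like the travel-card profile.
print(recommend((35, 1, 2)))  # -> travel card
```

    A production system would learn the profiles from data rather than hard-coding them, but the classify-then-recommend shape is the same.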

    Although the benefits are great, there are also risks in the use of machine learning. One of them is “overtraining” (overfitting), which causes systems to detect only the patterns they encountered during the training phase. Rodríguez explained the problem through an anecdote: allegedly, the US Army once developed tank-detection software that worked flawlessly in the laboratory. “However, when they tried to take it to the real world, they realized that something was failing: it was unable to detect tanks,” explains the expert. The problem was that the system had been trained exclusively on images of tanks under cloudy skies, which turned the software into “a fabulous cloud detector” when used outdoors.
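    The tank anecdote can be reproduced in miniature: a model that latches onto a spurious feature (the sky) scores perfectly on its training set and fails completely on held-out data where the correlation breaks. The features and labels below are synthetic, purely to illustrate why a validation set matters:

```python
# Sketch of the "overtraining" failure in the tank anecdote: a model
# that memorises a spurious feature (cloudy sky) scores perfectly on
# its training set but fails on held-out data where the feature flips.

def train_memoriser(examples):
    """'Learn' by keying the label off the sky feature, which happens
    to separate the training set perfectly -- the wrong pattern."""
    table = {sky: label for (sky, _tank), label in examples}
    return lambda sky, tank: table.get(sky, 0)

# Training set: every tank photo was cloudy, every non-tank photo sunny.
train = [(("cloudy", 1), 1), (("sunny", 0), 0)]
model = train_memoriser(train)

# Held-out set: the correlation breaks -- a sunny tank, a cloudy field.
test = [(("sunny", 1), 1), (("cloudy", 0), 0)]
train_acc = sum(model(*x) == y for x, y in train) / len(train)
test_acc = sum(model(*x) == y for x, y in test) / len(test)
print(train_acc, test_acc)  # -> 1.0 0.0
```

    The lab accuracy of 100% says nothing here; only the held-out evaluation exposes that the model learned clouds, not tanks.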

    https://www.bbva.com/en/machine-learning-intelligence-learns/
 