Posted by Richy George on 24 August, 2022. This post was originally published on this site.
Kissflow Work Platform provides a collection of tools to create, manage, and track workflows. It has five core modules to handle any type of work that moves your way: processes, projects, cases, datasets, and collaboration, although projects and cases are currently being merged into a new module, boards.
With both no-code and low-code tools, Kissflow promises to extend application development to your entire organization. The company also offers a BPM platform, not covered in this review.
Kissflow comes with more than 200 pre-built templates. You can install these templates from the Kissflow marketplace. After installing them, you have complete control to configure them to your needs. Examples of templates include sales enquiry, purchase request, employee transfer request, purchase catalog, software directory, lead qualification, sales pipeline, customer onboarding, IT help desk, bug tracking, and incident management.
Kissflow was renamed from OrangeScape Technologies in 2019. The company is headquartered in Chennai, India. Kissflow claims to have a million users across 10,000 companies in 160 countries.
In addition to the platform, Kissflow offers paid training and consulting, which extends to building out your entire network of interconnected processes. Kissflow also has a partner reseller program.
As we’ll see, Kissflow has no-code modules aimed at non-programmers, and low-code modules (apps) aimed at IT. The user experience for each has been tailored to their target audiences, which makes them quite different.
There were over 400 vendors in the no-code and low-code development space the last time I looked. Gartner covers about 250 of them. Kissflow counts Microsoft Power Apps and Google Cloud AppSheet as competitors for its no-code modules, and OutSystems, Mendix, and Appian as competitors for its low-code modules.
A Kissflow form is an entity with which you can collect input from the people who participate in your flow. Forms are predominantly used in flows like processes, projects, and cases in Kissflow. A form has three primary components—section, field, and table. In addition, forms have workflows and permissions, and fields can be attached to computations.
A process is a type of workflow that enforces a strict, sequential set of steps performed on form data. Flow admins for a process can set up a form to carry data, and then define a path for it to follow. The system automatically routes requests through the steps until the item is complete. Processes are a great fit wherever you want strict control and efficiency.
A typical example is a press release request, which pairs a form with an approval flow. Common processes include vacation requests, purchase requests, employee onboarding, budget approval requests, visitor pass requests, and vendor enrollments.
Case systems are useful for support requests, incident management, service requests, bug tracking, help desks, sales pipelines, customer onboarding, HR help desks, and facility service requests. Cases support both list and board views.
Kissflow projects are adaptable and support various project management methodologies such as value stream mapping, work breakdown structure, and iterative incremental development. Projects use Kanban boards as their default visualization, and also support list and matrix visualizations.
Projects and case systems are currently being combined into a new module, boards.
A dataset is a collection of tabular data that can be used in your flows. Forms in your flows can look up information in your datasets or use the information for advanced assignment logic.
A view is a subset of your dataset. You can create views to restrict access to certain parts of a dataset.
An integration consists of a series of sequential steps that indicate how data should be transferred and transformed between different Kissflow applications or third-party applications. Each of these steps is built using connectors.
Any integration starts with a trigger—an event in one of your connectors that kick-starts your workflow. It pushes data from the flow to complete one or many connector actions.
Overall, Kissflow has a good selection of low-code and no-code capabilities, even though its Cases and Projects modules are currently in flux. Combining those two no-code modules into a single Boards module does seem like a good idea, as deciding whether you need a Case or Project system at the beginning of your development effort can be challenging, especially if you’re new to Kissflow.
I initially criticized the separation of Kissflow’s no-code modules and low-code apps into different development systems with inconsistent user experiences. After some convincing by a Kissflow product manager, I accepted that real developers need the three-stage (dev, test, production) deployment process implemented for apps, while citizen developers often find that too complicated. I’ve seen similar issues in several contexts, not just with citizen developers but also with data scientists.
As a side effect of Kissflow’s new implementation of apps and transition from Cases and Projects to Boards, its documentation has become at least partially out of date. (Some of the documentation pages still say OrangeScape, which is a clue to their age.) I’m sure that will all be fixed in time. Meanwhile, expect to ask lots of questions as you learn the product.
Cost: Small business: $10/user/month, 50 users minimum, $6,000 billed annually. Corporate: $20/user/month, 100 users minimum, $24,000 billed annually. Enterprise: Get a custom quote. A free trial is available without the need for a credit card. Courses on the Academy range from $50 to $550, payable by credit card.
Platform: Server: Kissflow is hosted on Google Cloud Platform. Client: Chrome 56+, Safari 13.2+, Edge 79+.
Posted by Richy George on 10 August, 2022.
In this article, you’ll learn about Node.js and Deno, the differences between CommonJS and ECMAScript modules, using TypeScript with Deno, and faster deployments with Deno Deploy. We’ll conclude with notes to help you decide between Node.js and Deno for your next development project.
Node’s approach to scaling with callback functions requires less memory to handle more connections than most competitive architectures that scale with threads, including Apache HTTP Server, the various Java application servers, IIS and ASP.NET, and Ruby on Rails.
Much of Node’s utility comes from its large package library, which is accessible from the npm command. npm, the Node Package Manager, is part of the standard Node.js installation, although it has its own website.
Node’s module system problem was solved by an update in mid-2022. Node’s original module format, CommonJS, is what npm originally supported. Since then, the ECMAScript committee officially blessed ECMAScript modules, also known as ES modules, which are supported by the jspm package manager. Deno also supports ES modules.
Experimental support for ES modules was added in Node.js 12.12 and is stable from Node.js 16 forward. TypeScript 4.7 also supports ES modules for Node.js 16.
The way to load a CommonJS module is to use a require statement. The way to load an ECMAScript module is to use an import statement along with a matching export statement.
The latest Node.js has loaders for both CommonJS and ES modules. How are they different? The CommonJS loader is fully synchronous; is responsible for handling require() calls; supports folders as modules; and tries adding extensions (.js, .json, or .node) if one was omitted from the require() call. The CommonJS loader cannot be used to load ECMAScript modules. The ES modules loader is asynchronous; is responsible for handling both import statements and import() expressions; and does not support folders as modules (directory indexes such as ./startup/index.js must be fully specified).
It is well known that Deno improves security over Node. Mainly, this is because Deno, by default, does not let a program access the disk, network, subprocesses, or environment variables. When you need to allow any of these, you can opt in with a command-line flag, which can be as granular as you like; for example, --allow-net=google.com. Another security improvement in Deno is that it always dies on uncaught errors. Node, by contrast, allows execution to proceed after an uncaught error, with unpredictable results.
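The opt-in model looks like this on the command line (the script names here are hypothetical):

```shell
# No flags: the script can do pure computation only; any attempt
# to touch disk, network, subprocesses, or env vars throws.
deno run compute.ts

# Grant network access, restricted to a single host.
deno run --allow-net=google.com fetch_page.ts

# Grant read-only access to one directory.
deno run --allow-read=/var/data report.ts
```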
First off, many times, using Node packages from Deno just works. Even better, there are workarounds for many of the common stumbling blocks. These include using the std/node modules of the Deno standard library to “polyfill” the built-in modules of Node; using CDNs to access the vast majority of npm packages in ways that work under Deno; and using import maps. Moreover, Deno has a Node compatibility mode, starting with Deno 1.15.
On the downside, Node’s plugin system is incompatible with Deno; Deno’s Node compatibility mode doesn’t support TypeScript; and a few built-in Node modules (such as vm) are incompatible with Deno.
If you’re a Node user thinking of switching to Deno, here’s a cheat sheet to help you.
Deno transpiles TypeScript to JavaScript internally, using the Rust-based swc compiler. When the code has been type-checked (if checking is enabled) and transformed, it is stored in a cache. In other words, unlike Node.js or a browser, you don’t need to manually transpile your TypeScript for Deno with the tsc compiler.
As of Deno 1.23, there is no TypeScript type-checking in Deno by default. Since most developers interact with the type-checker through their editor, type-checking again when Deno starts up doesn’t make a lot of sense. That said, you can enable type-checking with the --check flag to Deno.
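For example (main.ts is a hypothetical entry point):

```shell
# Run without type-checking (the default since Deno 1.23).
deno run main.ts

# Opt back in to type-checking at startup.
deno run --check main.ts

# Or type-check without running the program, e.g. in CI.
deno check main.ts
```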
Built on the same modern systems as the Deno CLI, Deno Deploy provides the latest and greatest in web technologies in a globally scalable way:
Deno Deploy has two tiers. The free tier is limited to 100,000 requests per day, 100 GiB data transfer per month, and 10ms CPU time per request. The pro tier costs $10 per month including 5 million requests per month and 100 GiB data transfer, plus $2-per-million additional requests per month and $0.30/GiB data transfer over the included quota; the pro tier allows 50ms CPU time per request.
As you might expect, the answer of which technology is better for your use case depends on many factors. My bottom line: If you have an existing Node.js deployment that isn’t broken, then don’t fix it. If you have a new project that you intend to write in TypeScript, then I’d strongly consider Deno. However, if your TypeScript project needs to use multiple Node.js packages that do not have Deno equivalents, you will need to weigh the Deno project’s feasibility. Starting with a proof-of-concept is pretty much mandatory: It’s hard to predict whether you can make a given Node.js package work in Deno without trying it.
Posted by Richy George on 3 August, 2022.
Last year I wrote about eight databases that support in-database machine learning. In-database machine learning is important because it brings the machine learning processing to the data, which is much more efficient for big data, rather than forcing data scientists to extract subsets of the data to where the machine learning training and inference run.
Each of those databases approaches in-database machine learning in its own way.
Now there’s another database that can run machine learning internally: Snowflake.
Snowflake is a fully relational ANSI SQL enterprise data warehouse that was built from the ground up for the cloud. Its architecture separates compute from storage so that you can scale up and down on the fly, without delay or disruption, even while queries are running. You get the performance you need exactly when you need it, and you only pay for the compute you use.
Snowflake currently runs on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. It has recently added External Tables On-Premises Storage, which lets Snowflake users access their data in on-premises storage systems from companies including Dell Technologies and Pure Storage, expanding Snowflake beyond its cloud-only roots.
Snowflake is a fully columnar database with vectorized execution, making it capable of addressing even the most demanding analytic workloads. Snowflake’s adaptive optimization ensures that queries automatically get the best performance possible, with no indexes, distribution keys, or tuning parameters to manage.
Snowflake can support unlimited concurrency with its unique multi-cluster, shared data architecture. This allows multiple compute clusters to operate simultaneously on the same data without degrading performance. Snowflake can even scale automatically to handle varying concurrency demands with its multi-cluster virtual warehouse feature, transparently adding compute resources during peak load periods and scaling down when loads subside.
When I reviewed Snowflake in 2019, if you wanted to program against its API you needed to run the program outside of Snowflake and connect through ODBC or JDBC drivers or through native connectors for programming languages. That changed with the introduction of Snowpark in 2021.
Snowpark brings to Snowflake deeply integrated, DataFrame-style programming in the languages developers like to use, starting with Scala, then extending to Java and now Python. Snowpark is designed to make building complex data pipelines a breeze and to allow developers to interact with Snowflake directly without moving data.
The Snowpark library provides an intuitive API for querying and processing data in a data pipeline. Using this library, you can build applications that process data in Snowflake without moving data to the system where your application code runs.
The Snowpark API provides programming language constructs for building SQL statements. For example, the API provides a select method that you can use to specify the column names to return, rather than writing 'select column_name' as a string. Although you can still use a string to specify the SQL statement to execute, you benefit from features like intelligent code completion and type checking when you use the native language constructs provided by Snowpark.
Snowpark operations are executed lazily on the server, which reduces the amount of data transferred between your client and the Snowflake database. The core abstraction in Snowpark is the DataFrame, which represents a set of data and provides methods to operate on that data. In your client code, you construct a DataFrame object and set it up to retrieve the data that you want to use.
The data isn’t retrieved at the time when you construct the DataFrame object. Instead, when you are ready to retrieve the data, you can perform an action that evaluates the DataFrame objects and sends the corresponding SQL statements to the Snowflake database for execution.
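The lazy pattern can be sketched with a toy stand-in (this is illustrative Python, not the actual Snowpark API): transformation methods only record a query plan, and only the final action renders the SQL that would be sent to the server.

```python
# Toy sketch of Snowpark-style lazy evaluation (not the real API):
# DataFrame methods only record the query plan; to_sql() is the
# "action" that finally generates SQL.
class LazyFrame:
    def __init__(self, table):
        self.table = table
        self.columns = "*"
        self.condition = None

    def select(self, *cols):
        # Records the projection and returns a new frame; nothing executes.
        new = LazyFrame(self.table)
        new.columns = ", ".join(cols)
        new.condition = self.condition
        return new

    def filter(self, condition):
        # Records the predicate; still nothing executes.
        new = LazyFrame(self.table)
        new.columns = self.columns
        new.condition = condition
        return new

    def to_sql(self):
        # Only now is the accumulated plan rendered as SQL.
        sql = f"SELECT {self.columns} FROM {self.table}"
        if self.condition:
            sql += f" WHERE {self.condition}"
        return sql

df = LazyFrame("sales").select("qty", "sold_time_stamp").filter("qty > 10")
print(df.to_sql())  # SELECT qty, sold_time_stamp FROM sales WHERE qty > 10
```

In real Snowpark the action is a method like collect() or show(), which sends the generated SQL to Snowflake and returns results.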
Snowpark for Python is available in public preview to all Snowflake customers, as of June 14, 2022. In addition to the Snowpark Python API and Python Scalar User Defined Functions (UDFs), Snowpark for Python supports the Python UDF Batch API (Vectorized UDFs), Table Functions (UDTFs), and Stored Procedures.
These features combined with Anaconda integration provide the Python community of data scientists, data engineers, and developers with a variety of flexible programming contracts and access to open source Python packages to build data pipelines and machine learning workflows directly within Snowflake.
Snowpark for Python includes a local development experience you can install on your own machine, including a Snowflake channel on the Conda repository. You can use your preferred Python IDEs and dev tools and be able to upload your code to Snowflake knowing that it will be compatible.
By the way, Snowpark for Python is free open source. That’s a change from Snowflake’s history of keeping its code proprietary.
The following sample Snowpark for Python code creates a DataFrame that aggregates book sales by year. Under the hood, DataFrame operations are transparently converted into SQL queries that get pushed down to the Snowflake SQL engine.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, year
# fetch snowflake connection information
from config import connection_parameters
# build connection to Snowflake
session = Session.builder.configs(connection_parameters).create()
# use Snowpark API to aggregate book sales by year
booksales_df = session.table("sales")
booksales_by_year_df = booksales_df.groupBy(year("sold_time_stamp")).agg([(col("qty"),"count")]).sort("count", ascending=False)
Snowflake’s “getting started” tutorial demonstrates an end-to-end data science workflow, using Snowpark for Python to load, clean, and prepare data and then deploy the trained model to Snowflake using a Python UDF for inference. Nominally, the tutorial takes 45 minutes.
The task is the classic customer churn prediction for an internet service provider, which is a straightforward binary classification problem. The tutorial starts with a local setup phase using Anaconda; I installed Miniconda for that. It took longer than I expected to download and install all the dependencies of the Snowpark API, but that worked fine, and I appreciate the way Conda environments avoid clashes among libraries and versions.
This quickstart begins with a single Parquet file of raw data and extracts, transforms, and loads the relevant information into multiple Snowflake tables.
Now that we’ve finished the ETL/data engineering phase, we can move on to the data analysis/data science phase.
Finally, we create and deploy a user-defined function (UDF) for prediction, using more data and a better model.
You can go into more depth by running Machine Learning with Snowpark Python, a 300-level quickstart, which analyzes Citibike rental data and builds an orchestrated end-to-end machine learning pipeline to perform monthly forecasts using Snowflake, Snowpark Python, PyTorch, and Apache Airflow. It also displays results using Streamlit.
Overall, Snowpark for Python is very good. While I stumbled over a couple of things in the quickstart, they were resolved fairly quickly with help from Snowflake’s extensibility support.
I like the wide range of popular Python machine learning and deep learning libraries and frameworks included in the Snowpark for Python installation. I like the way Python code running on my local machine can control Snowflake warehouses dynamically, scaling them up and down at will to control costs and keep runtimes reasonably short. I like the efficiency of doing most of the heavy lifting inside the Snowflake warehouses using Snowpark. I like being able to deploy predictors as UDFs in Snowflake without incurring the costs of deploying prediction endpoints on major cloud services.
Essentially, Snowpark for Python gives data engineers and data scientists a nice way to do DataFrame-style programming against the Snowflake enterprise data warehouse, including the ability to set up full-blown machine learning pipelines to run on a recurrent schedule.
Cost: $2 per credit plus $23 per TB per month storage, standard plan, prepaid storage. 1 credit = 1 node*hour, billed by the second. Higher level plans and on-demand storage are more expensive. Data transfer charges are additional, and vary by cloud and region. When a virtual warehouse is not running (i.e., when it is set to sleep mode), it does not consume any Snowflake credits. Serverless features use Snowflake-managed compute resources and consume Snowflake credits when they are used.
Platform: Amazon Web Services, Microsoft Azure, Google Cloud Platform.