
Master Theorem


In this article, I am going to discuss the Master Theorem. Please read our previous article where we discussed the Properties of Asymptotic Notations. Here, you will learn what the Master Theorem is and how it is used to solve recurrence relations.

Master Theorem
The master method is a formula for solving recurrence relations of the form:
T(n) = aT(n/b) + f(n),
where,
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem (all subproblems are assumed to have the same size)
f(n) = cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions

Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function. An asymptotically positive function means that for a sufficiently large value of n, we have f(n) > 0.

The master theorem is used in calculating the time complexity of recurrence relations (divide and conquer algorithms) in a simple and quick way.

If a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function, then the time complexity of a recursive relation is given by

T(n) = aT(n/b) + f(n)
where, T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ε)), then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)), then T(n) = Θ(f(n)).
Here, ε > 0 is a constant.

Each of the above conditions can be interpreted as:
If the cost of solving the subproblems at each level increases by a certain factor, the value of f(n) becomes polynomially smaller than n^(log_b a). Thus, the time complexity is dominated by the cost of the last level, i.e., n^(log_b a).

If the cost of solving the subproblems at each level is nearly equal, then the value of f(n) is Θ(n^(log_b a)). Thus, the time complexity is f(n) times the total number of levels, i.e., n^(log_b a) · log n.

If the cost of solving the subproblems at each level decreases by a certain factor, the value of f(n) becomes polynomially larger than n^(log_b a). Thus, the time complexity is dominated by the cost of f(n).

Solved Example of Master Theorem
T(n) = 3T(n/2) + n²
Here,
a = 3
n/b = n/2
f(n) = n²
log_b a = log_2 3 ≈ 1.58 < 2
i.e., f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, since n² grows polynomially faster than n^(log_2 3).
Case 3 applies here.
Thus, T(n) = Θ(f(n)) = Θ(n²)
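To see how this classification can be applied mechanically, here is a small C# sketch (my own illustration, not part of the original material) that handles only the restricted form T(n) = aT(n/b) + Θ(n^k), where the non-recursive work is a plain polynomial:

using System;

public static class MasterTheoremDemo
{
    // Classifies T(n) = a*T(n/b) + Θ(n^k) using the three master theorem cases.
    // Only covers polynomial f(n) = n^k; it is an illustrative sketch, not a general solver.
    public static string Solve(double a, double b, double k)
    {
        if (a < 1 || b <= 1) return "Master theorem does not apply (requires a >= 1 and b > 1).";

        double critical = Math.Log(a) / Math.Log(b);   // log_b(a), the "critical exponent"
        const double eps = 1e-9;                       // tolerance for floating-point comparison

        if (k < critical - eps) return $"Case 1: T(n) = Θ(n^{critical:0.###})";
        if (k > critical + eps) return $"Case 3: T(n) = Θ(n^{k:0.###})";
        return $"Case 2: T(n) = Θ(n^{k:0.###} · log n)";
    }

    public static void Main()
    {
        Console.WriteLine(Solve(3, 2, 2));   // the example above: Case 3, Θ(n^2)
        Console.WriteLine(Solve(2, 2, 1));   // merge sort: Case 2, Θ(n · log n)
        Console.WriteLine(Solve(4, 2, 1));   // Case 1, Θ(n^2)
    }
}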

Master Theorem Limitations
The master theorem cannot be used if:
1. T(n) is not monotone, e.g., T(n) = sin n
2. f(n) is not a polynomial, e.g., f(n) = 2^n
3. a is not a constant, e.g., a = 2n
4. a < 1

Master Theorem for Divide and Conquer Recurrences
All divide and conquer algorithms divide the problem into subproblems, each of which is a smaller part of the original problem, and then perform some additional work to compute the final answer. As an example, the merge sort algorithm operates on two subproblems, each of which is half the size of the original, and then performs O(n) additional work for merging. This gives the running time equation:

T(n) = 2T(n/2) + O(n)
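As a reminder of where this recurrence comes from, here is a minimal C# merge sort sketch (an illustrative implementation of my own, assuming integer arrays): the two recursive calls correspond to the 2T(n/2) term and the merge step to the O(n) term.

using System;

public static class MergeSortDemo
{
    public static void MergeSort(int[] a, int left, int right)
    {
        if (left >= right) return;              // base case: 0 or 1 element

        int mid = (left + right) / 2;
        MergeSort(a, left, mid);                // T(n/2)
        MergeSort(a, mid + 1, right);           // T(n/2)
        Merge(a, left, mid, right);             // O(n) work outside the recursive calls
    }

    private static void Merge(int[] a, int left, int mid, int right)
    {
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;

        while (i <= mid && j <= right)
            temp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i <= mid) temp[k++] = a[i++];
        while (j <= right) temp[k++] = a[j++];

        Array.Copy(temp, 0, a, left, temp.Length);
    }

    public static void Main()
    {
        int[] data = { 5, 2, 9, 1, 7 };
        MergeSort(data, 0, data.Length - 1);
        Console.WriteLine(string.Join(", ", data));   // 1, 2, 5, 7, 9
    }
}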

The following theorem can be used to determine the running time of divide and conquer algorithms. For a given program (algorithm), first, we try to find the recurrence relation for the problem. If the recurrence is of the below form, then we can directly give the answer without fully solving it. If the recurrence is of the form T(n) = aT(n/b) + Θ(n^k · log^p n), where a ≥ 1, b > 1, k ≥ 0 and p is a real number, then:
  1. If a > b^k, then T(n) = Θ(n^(log_b a)).
  2. If a = b^k:
     a. If p > −1, then T(n) = Θ(n^(log_b a) · log^(p+1) n).
     b. If p = −1, then T(n) = Θ(n^(log_b a) · log log n).
     c. If p < −1, then T(n) = Θ(n^(log_b a)).
  3. If a < b^k:
     a. If p ≥ 0, then T(n) = Θ(n^k · log^p n).
     b. If p < 0, then T(n) = O(n^k).

Divide and Conquer Master Theorem: Problems & Solutions
For each of the following recurrences, give an expression for the runtime T(n) if the recurrence can be solved with the Master Theorem; otherwise, indicate that the Master Theorem does not apply.

Master Theorem for Subtract and Conquer Recurrences
If T(n) = aT(n − b) + f(n) for n > 1 and T(n) = c for n ≤ 1, where c > 0, a > 0, b > 0, k ≥ 0 are constants and f(n) = O(n^k), then:
  1. If a < 1, then T(n) = O(n^k).
  2. If a = 1, then T(n) = O(n^(k+1)).
  3. If a > 1, then T(n) = O(n^k · a^(n/b)).

A variant of Subtraction and Conquer Master Theorem
The solution to the equation T(n) = T(α n) + T((1 – α)n) + βn, where 0 < α < 1 and β > 0 are constants, is O(n log n).

Method of Guessing and Confirming
Now, let us discuss a method that can be used to solve any recurrence. The basic idea behind this method is: guess the answer; and then prove it correct by induction.

In other words, it addresses the question: What if the given recurrence doesn’t seem to match with any of these (master theorem) methods? If we guess a solution and then try to verify our guess inductively, usually either the proof will succeed (in which case we are done), or the proof will fail (in which case the failure will help us refine our guess).
Amortized Analysis
Amortized analysis refers to determining the time-averaged running time for a sequence of operations. It is different from average-case analysis because amortized analysis does not make any assumption about the distribution of the data values, whereas average-case analysis assumes the data are not "bad" (e.g., some sorting algorithms do well on average over all input orderings but very badly on certain input orderings). That is, amortized analysis is a worst-case analysis, but for a sequence of operations rather than for individual operations.

The motivation for amortized analysis is to better understand the running time of certain techniques, where standard worst-case analysis provides an overly pessimistic bound. Amortized analysis generally applies to a method that consists of a sequence of operations, where the vast majority of the operations are cheap, but some of the operations are expensive. If we can show that the expensive operations are particularly rare, we can charge their cost to the cheap operations, and only bound the cheap operations.

The general approach is to assign an artificial cost to each operation in the sequence, such that the total of the artificial costs for the sequence of operations bounds the total of the real costs for the sequence. This artificial cost is called the amortized cost of an operation. The amortized cost is thus a correct way of understanding the overall running time; note, however, that particular operations can still take longer, so it is not a way of bounding the running time of any individual operation in the sequence.

When one event in a sequence affects the cost of later events:
    One particular task may be expensive, but it may leave the data structure in a state that makes the next few operations easier.
Example:
Let us consider an array of elements from which we want to find the kth smallest element. We can solve this problem by using sorting. After sorting the given array, we just need to return the kth element from it. The cost of performing the sort (assuming a comparison-based sorting algorithm) is O(n log n). If we perform n such selections, then the average cost of each selection is O(n log n / n) = O(log n). This clearly indicates that sorting once reduces the complexity of the subsequent operations.
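The following C# sketch (my own illustration of the example above) pays the O(n log n) sorting cost once in the constructor, after which every kth-smallest query is O(1):

using System;

// Pay O(n log n) once to sort; every k-th-smallest query afterwards is O(1).
// Over n queries, the sort cost amortizes to O(log n) per query.
public class KthSmallestSelector
{
    private readonly int[] _sorted;

    public KthSmallestSelector(int[] values)
    {
        _sorted = (int[])values.Clone();
        Array.Sort(_sorted);                 // one-time O(n log n) cost
    }

    // k is 1-based: k = 1 returns the smallest element.
    public int KthSmallest(int k) => _sorted[k - 1];   // O(1) per query
}

public static class AmortizedDemo
{
    public static void Main()
    {
        var selector = new KthSmallestSelector(new[] { 7, 2, 9, 4, 1 });
        Console.WriteLine(selector.KthSmallest(1));   // 1
        Console.WriteLine(selector.KthSmallest(3));   // 4
    }
}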

In the next article, I am going to discuss Recursion and Backtracking in detail. Here, in this article, I try to explain the Master Theorem. I hope you enjoy this Master Theorem article. I would like to have your feedback. Please post your feedback, question, or comments about this article.

Summary:
I hope this post will be helpful to understand the concept of Master Theorem
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉

Properties of Asymptotic Notations


In this article, I am going to discuss Properties of Asymptotic Notations. Please read our previous article where we discussed Asymptotic Notations.

Properties of Asymptotic Notations:
As we have gone through the definitions of these three notations, let's now discuss some important properties of these notations.

General Properties:
If f(n) is O(g(n)), then a*f(n) is also O(g(n)), where a is a constant.

Example: f(n) = 2n²+5 is O(n²)
then 7*f(n) = 7(2n²+5)
= 14n²+35 is also O(n²)

This property also holds for Θ and Ω notations. We can say:
If f(n) is Θ(g(n)), then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω(g(n)), then a*f(n) is also Ω(g(n)), where a is a constant.

Reflexive Properties:
If f(n) is given, then f(n) is O(f(n)). Example: f(n) = n² is O(n²), i.e., O(f(n)).

This property also holds for Θ and Ω notations. We can say:
If f(n) is given, then f(n) is Θ(f(n)).
If f(n) is given, then f(n) is Ω(f(n)).

Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) = O(h(n)).
Example: if f(n) = n, g(n) = n², and h(n) = n³,
then n is O(n²) and n² is O(n³), so n is O(n³).

This property also holds for Θ and Ω notations. We can say:
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)), then f(n) = Θ(h(n)).
If f(n) is Ω(g(n)) and g(n) is Ω(h(n)), then f(n) = Ω(h(n)).

Symmetric Properties:
If f(n) is Θ(g(n)), then g(n) is Θ(f(n)).
Example: f(n) = n² and g(n) = n², then f(n) = Θ(n²) and g(n) = Θ(n²)
This property holds only for Θ notation.

Transpose Symmetric Properties:
If f(n) is O(g(n)), then g(n) is Ω(f(n)).
Example: f(n) = n, g(n) = n², then n is O(n²) and n² is Ω(n)
This property holds only for O and Ω notations.

Some More Properties:
1. If f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)).
2. If f(n) = O(g(n)) and d(n) = O(e(n)),
then f(n) + d(n) = O(max(g(n), e(n)))

Example: f(n) = n, i.e., O(n)
d(n) = n², i.e., O(n²)
then f(n) + d(n) = n + n², i.e., O(n²)

3. If f(n) = O(g(n)) and d(n) = O(e(n)),
then f(n) * d(n) = O(g(n) * e(n))

Example: f(n) = n, i.e., O(n) and d(n) = n², i.e., O(n²)
then f(n) * d(n) = n * n² = n³, i.e., O(n³)

Commonly used Logarithms and Summations:
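A standard set of such identities is listed below as a reference (this compact list is my own selection and may differ from the exact figure in the source):

\[
\begin{aligned}
&\log(xy) = \log x + \log y, \qquad \log\left(\tfrac{x}{y}\right) = \log x - \log y, \qquad \log x^{n} = n\log x,\\
&\log_b a = \frac{\log_c a}{\log_c b}, \qquad a^{\log_b c} = c^{\log_b a},\\
&\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad \sum_{i=1}^{n} i^{2} = \frac{n(n+1)(2n+1)}{6},\\
&\sum_{i=0}^{n} r^{i} = \frac{r^{n+1}-1}{r-1}\ (r \neq 1), \qquad \sum_{i=1}^{n} \frac{1}{i} \approx \ln n.
\end{aligned}
\]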

In the next article, I am going to discuss the Master Theorem. Here, in this article, I try to explain the Properties of Asymptotic Notations. I hope you enjoy this Properties of Asymptotic Notations article. I would like to have your feedback. Please post your feedback, question, or comments about this article.

Summary:
I hope this post will be helpful to understand the Properties of Asymptotic Notations
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉

Asymptotic Notation


In this article, I am going to discuss Asymptotic Notation. Please read our previous article where we discussed the Analysis of Algorithm and why it is important to analyze the algorithm. At the end of this article, you will understand the following pointers in detail with examples.
  1. What are Asymptotic Notations?
  2. Big-O Notation
  3. Omega-Ω Notation
  4. Theta-Θ Notation
  5. Why is it called Asymptotic Analysis?
Asymptotic Notation
Having the expressions for the best, average, and worst cases, for all three cases we need to identify the upper and lower bounds. To represent these upper and lower bounds, we need some kind of syntax, and that is the subject of the following discussion. Let us assume that the given algorithm is represented in the form of function f(n).

The following 3 asymptotic notations are mostly used to represent the time complexity of algorithms.

Big-O Notation [Upper Bounding Function]
This notation gives the tight upper bound of the given function. Generally, it is represented as f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n). For example, if f(n) = n⁴ + 100n² + 10n + 50 is the given algorithm, then n⁴ is g(n). That means g(n) gives the maximum rate of growth for f(n) at larger values of n.

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }.
The statement f(n) = O(g(n)) states only that g(n) is an upper bound on f(n) for all n ≥ n0.

Big-O Examples

Example-1 Find upper bound for f(n) = 3n + 8
Solution: 3n + 8 ≤ 4n, for all n ≥ 8
3n + 8 = O(n) with c = 4 and n0 = 8

Example-2 Find upper bound for f(n) = n² + 1
Solution: n² + 1 ≤ 2n², for all n ≥ 1
n² + 1 = O(n²) with c = 2 and n0 = 1

Example-3 Find upper bound for f(n) = n⁴ + 100n² + 50
Solution: n⁴ + 100n² + 50 ≤ 2n⁴, for all n ≥ 11
n⁴ + 100n² + 50 = O(n⁴) with c = 2 and n0 = 11

Example-4 Find upper bound for f(n) = 2n³ – 2n²
Solution: 2n³ – 2n² ≤ 2n³, for all n > 1
2n³ – 2n² = O(n³) with c = 2 and n0 = 1

Example-5 Find upper bound for f(n) = n
Solution: n ≤ n, for all n ≥ 1
n = O(n) with c = 1 and n0 = 1

Example-6 Find upper bound for f(n) = 410
Solution: 410 ≤ 410, for all n > 1
410 = O(1) with c = 1 and n0 = 1

No Uniqueness?
There is no unique set of values for n0 and c in proving the asymptotic bounds. Let us consider, 100n + 5 = O(n). For this function, there are multiple n0 and c values possible.

Solution 1: 100n + 5 ≤ 100n + n = 101n, for all n ≥ 5; so n0 = 5 and c = 101 is a solution.
Solution 2: 100n + 5 ≤ 100n + 5n = 105n, for all n ≥ 1; so n0 = 1 and c = 105 is also a solution.
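The following C# sketch (an illustrative check of my own, not part of the original article) numerically verifies both (c, n0) pairs over a finite range:

using System;

public static class BigOCheck
{
    // Checks f(n) <= c * g(n) for every n in [n0, limit].
    // This is only a numeric sanity check over a finite range, not a proof.
    public static bool HoldsUpTo(Func<long, long> f, Func<long, long> g,
                                 long c, long n0, long limit = 1_000_000)
    {
        for (long n = n0; n <= limit; n++)
            if (f(n) > c * g(n)) return false;
        return true;
    }

    public static void Main()
    {
        Func<long, long> f = n => 100 * n + 5;   // f(n) = 100n + 5
        Func<long, long> g = n => n;             // g(n) = n

        Console.WriteLine(HoldsUpTo(f, g, c: 101, n0: 5));   // True
        Console.WriteLine(HoldsUpTo(f, g, c: 105, n0: 1));   // True
        Console.WriteLine(HoldsUpTo(f, g, c: 1,   n0: 1));   // False: c = 1 is too small
    }
}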

Omega-Ω Notation [Lower Bounding Function]
Similar to the O discussion, this notation gives the tighter lower bound of the given algorithm and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the tighter lower bound of f(n) is g(n). For example, if f(n) = 100n² + 10n + 50, g(n) is Ω(n²).

For a given function g(n), we denote by Ω(g(n)) the set of functions:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0 }.

Omega (Ω) Examples
Example-1 Find lower bound for f(n) = 5n².
Solution: ∃ c, n0 such that 0 ≤ cn² ≤ 5n² ⇒ c = 5 and n0 = 1
5n² = Ω(n²) with c = 5 and n0 = 1

Example-2 Prove f(n) = 100n + 5 ≠ Ω(n²).
Solution: Suppose ∃ c, n0 such that 0 ≤ cn² ≤ 100n + 5 for all n ≥ n0. Since 100n + 5 ≤ 105n for all n ≥ 1, this would require cn² ≤ 105n ⇒ n(cn – 105) ≤ 0. Since n is positive, cn – 105 ≤ 0 ⇒ n ≤ 105/c. Contradiction: n cannot be bounded by a constant, so no such c and n0 exist.

Theta-Θ Notation [Order Function]
The theta notation bounds a function from above and below, so it defines exact asymptotic behavior. A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, consider the following expression.

3n³ + 6n² + 6000 = Θ(n³)

Dropping lower-order terms is always fine because there will always be an n0 after which n³ has higher values than n², irrespective of the constants involved.

For a given function g(n), we denote by Θ(g(n)) the following set of functions:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }

The above definition means, if f(n) is theta of g(n), then the value f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) must be non-negative for values of n greater than n0.

Theta (Θ) Examples

Example 1 Prove n ≠ Θ(n²)
Solution: c1n² ≤ n ≤ c2n² ⇒ the left inequality only holds for n ≤ 1/c1
n ≠ Θ(n²)

Example 2 Prove 6n³ ≠ Θ(n²)
Solution: c1n² ≤ 6n³ ≤ c2n² ⇒ the right inequality only holds for n ≤ c2/6
6n³ ≠ Θ(n²)

Example 3 Prove n ≠ Θ(log n)
Solution: c1 log n ≤ n ≤ c2 log n ⇒ c2 ≥ n/log n, ∀ n ≥ n0 – impossible, since n/log n grows without bound while c2 is a constant.
n ≠ Θ(log n)

Important Notes
For analysis (best case, worst case, and average), we try to give the upper bound (O) and lower bound (Ω) and average running time (Θ). From the above examples, it should also be clear that, for a given function (algorithm), getting the upper bound (O) and lower bound (Ω) and average running time (Θ) may not always be possible. For example, if we are discussing the best case of an algorithm, we try to give the upper bound (O) and lower bound (Ω) and average running time(Θ).

In the remaining chapters, we generally focus on the upper bound (O) because knowing the lower-bound (Ω) of an algorithm is of no practical importance, and we use the Θ notation if the upper bound (O) and lower bound (Ω) are the same.

Why is it called Asymptotic Analysis?
From the discussion above (for all three notations: worst case, best case, and average case), we can easily understand that in every case for a given function f(n) we are trying to find another function g(n) which approximates f(n) at higher values of n. That means g(n) is also a curve which approximates f(n) at higher values of n.

In mathematics, we call such a curve an asymptotic curve. In other terms, g(n) is the asymptotic curve for f(n). For this reason, we call algorithm analysis asymptotic analysis.

In the next article, I am going to discuss Properties of Asymptotic Notations. Here, in this article, I try to explain Asymptotic Notation. I hope you enjoy this Asymptotic Notation article. I would like to have your feedback. Please post your feedback, question, or comments about this article.

Summary:
I hope this post will be helpful to understand the concept of Asymptotic Notation
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉

Analysis of Algorithm


In this article, I am going to discuss the Analysis of Algorithm as well as why it is important to analyze an algorithm. Please read our previous article where we gave a brief introduction to Data Structure and Algorithm. At the end of this article, you will understand the following pointers in detail.
  1. Why Analysis of Algorithm?
  2. The goal of Analysis of Algorithms.
  3. What is Running Time Analysis?
  4. How to compare Algorithms?
  5. Does Asymptotic Analysis always work?
  6. What is the Rate of Growth?
  7. Types of Analysis
Why Analysis of Algorithm?
Algorithm analysis helps us determine which algorithm is the best in terms of time and space consumed. For example, to go from one place to another, there are various ways to travel: bus, train, flight, car, etc.

But we select the mode that is cost-efficient and time-efficient, depending on the situation. Similarly, in computer science, to sort an array there are various ways or algorithms like insertion sort, selection sort, quick sort, merge sort, etc.

The goal of Analysis of Algorithms
The goal of analysis of algorithms is to compare algorithms (or solutions) mainly in terms of running time, but also in terms of other factors (e.g., memory, developer effort, etc.).

What is Running Time Analysis?
It is the study of processing time versus the size of the input. The input may be of different types based on the problem. Common inputs are:
  1. Size of array
  2. Number of elements in a matrix
  3. Polynomial degree
  4. Number of bits in binary representation
  5. Edges and Vertices in a graph
How to compare Algorithms?
One naive way of doing this is to implement both the algorithms and run the two programs on your computer for different inputs and see which one takes less time. There are many problems with this approach to the analysis of algorithms.
  1. It might be possible that for some inputs, the first algorithm performs better than the second. And for some inputs, second performs better.
  2. It might also be possible that for some inputs, the first algorithm performs better on one machine and the second works better on another machine for some other inputs.
Asymptotic Analysis is a big idea that handles the above issues in analyzing algorithms. In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don’t measure the actual running time). We calculate, how the time (or space) taken by an algorithm increases with the input size.

This kind of comparison is independent of machine, programming style, etc.

Example to understand this concept:
For example, let us consider the search problem (searching a given item) in a sorted array. One way to search is Linear Search (order of growth is linear) and the other way is Binary Search (order of growth is logarithmic).

To understand how Asymptotic Analysis solves the above-mentioned problems in analyzing algorithms, let us say we run the Linear Search on a fast computer A and Binary Search on a slow computer B and we pick the constant values for the two computers so that it tells us exactly how long it takes for the given machine to perform the search in seconds.

Let’s say the constant for A is 0.2 and the constant for B is 1000 which means that A is 5000 times more powerful than B. For small values of input array size n, the fast computer may take less time. But, after a certain value of input array size, the Binary Search will definitely start taking less time compared to the Linear Search even though the Binary Search is being run on a slow machine.

The reason is the order of growth of Binary Search with respect to input size is logarithmic while the order of growth of Linear Search is linear. So the machine-dependent constants can always be ignored after a certain value of input size.

Here are some running times for this example:
Linear Search running time in seconds on A: 0.2 * n
Binary Search running time in seconds on B: 1000*log(n)

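A small C# sketch (my own illustration) that tabulates these two hypothetical running times for a few input sizes:

using System;

public static class SearchTimeComparison
{
    public static void Main()
    {
        // Hypothetical constants from the discussion above:
        // Linear Search on fast computer A takes 0.2 * n seconds,
        // Binary Search on slow computer B takes 1000 * log2(n) seconds.
        foreach (long n in new long[] { 10, 100, 1_000, 10_000, 100_000, 1_000_000 })
        {
            double linearOnA = 0.2 * n;
            double binaryOnB = 1000 * Math.Log(n, 2);
            Console.WriteLine($"n = {n,9}: Linear on A = {linearOnA,12:N1} s, Binary on B = {binaryOnB,10:N1} s");
        }
        // For small n, the fast machine running Linear Search wins, but once n is
        // large enough, Binary Search on the slow machine takes far less time.
    }
}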

Does Asymptotic Analysis always work?
Asymptotic Analysis is not perfect, but it is the best way available for analyzing algorithms. For example, say there are two sorting algorithms that take 1000·n·log n and 2·n·log n time respectively on a machine. Both of these algorithms are asymptotically the same (order of growth is n log n). So, with Asymptotic Analysis, we can't judge which one is better, as we ignore constants in Asymptotic Analysis.

Also, in Asymptotic analysis, we always talk about input sizes larger than a constant value. It might be possible that those large inputs are never given to your software and an algorithm that is asymptotically slower, always performs better for your particular situation. So, you may end up choosing an algorithm that is Asymptotically slower but faster for your software.

What is the Rate of Growth?
The rate at which running time increases as a function of input is called the rate of growth.

We know that for the growth of a function, the highest-order term matters the most, e.g., the term c1n² in the function c1n² + c2n + c3. Thus we can neglect the other terms and even the coefficient of the highest-order term, i.e., c1 (assuming coefficients are neither too large nor too small).

Even with these approximations, we will be able to know about the rate of the growth of our function and this is enough information to keep in our mind while developing an algorithm.

Commonly used rate of Growths in decreasing order
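As a compact reference (my own summary; the original figure may list more functions), a typical ordering is:

\[
n! \;>\; 2^{n} \;>\; n^{3} \;>\; n^{2} \;>\; n\log n \;>\; n \;>\; \sqrt{n} \;>\; \log n \;>\; 1
\]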

Commonly used functions and Their comparison:


Types of Analysis
To analyze the given algorithm, we need to know on what inputs the algorithm takes less time (performs well) and on what inputs the algorithm takes a huge time. We know that an algorithm can be represented in the form of an expression. That means we represent the algorithm with multiple expressions: one for the case where it takes less time and another for the case where it takes more time. In general-

Worst Case:- Defines the input for which the algorithm takes a huge time. It is the input for which the algorithm runs the slowest.

Best Case:- Defines the input for which the algorithm takes the lowest time. It is the input for which the algorithm works the fastest.

Average Case:- Provides a prediction about the running time of the algorithm. Assumes that the input is random.

Lower Bound <= Average Time <= Upper Bound
For a given algorithm, we can represent best, worst, and average cases in the form of expression.

Example:- f(n) = n² + 500, for worst case
f(n) = n + 100n + 500, for best case

Similarly, we can define the average case too.
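As a concrete illustration (this example is mine, not from the original text), consider linear search, where the three cases are easy to see:

using System;

public static class LinearSearchCases
{
    // Returns the index of key in arr, or -1 if not found.
    public static int LinearSearch(int[] arr, int key)
    {
        for (int i = 0; i < arr.Length; i++)
            if (arr[i] == key) return i;
        return -1;
    }

    public static void Main()
    {
        int[] arr = { 4, 8, 15, 16, 23, 42 };

        Console.WriteLine(LinearSearch(arr, 4));    // best case: 1 comparison   -> O(1)
        Console.WriteLine(LinearSearch(arr, 99));   // worst case: n comparisons -> O(n)
        // Average case: if the key is equally likely to be at any of the n positions,
        // the expected number of comparisons is about (n + 1) / 2, which is still O(n).
    }
}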

In the next article, I am going to discuss Asymptotic Notation. Here, in this article, I try to explain the Analysis of Algorithm. I hope you enjoy this Analysis of Algorithm article. I would like to have your feedback. Please post your feedback, question, or comments about this article.

Summary:
I hope this post will be helpful to understand the Analysis of Algorithm
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉


Introduction to Data Structure and Algorithm


In this article, I am going to give you a brief introduction to Data Structure and Algorithm. At the end of this article, you will understand what is Data Structure and Algorithm, its type, and why we need data structure and algorithm as well as Variables and Data Types.

Variables:
let's take a simple mathematical equation:
x + y – 10 = 20 (eq. 1.1)
For now, we don't need to worry about the equation. As we can observe, eq. 1.1 has some names which hold values (data). Similarly, in programming, something is required to hold data, and we call them variables. In eq. 1.1, x & y are variables which hold some data.

Data Types:
In eq. 1.1, x & y hold some data, and the data may be integers or floats like 0, 1, 0.5, -6.4, -2, etc. A data type is a set of some predefined values like integers, floating points, characters, strings, etc. There are two types of data types:
  1. System-defined data types (also called primitive data types)
  2. User-defined data types
System Defined Data Types
Primitive data types are defined by the system and provided by the programming language. Examples: int, float, char, bool, etc.

The allocation of bits for each primitive data type depends on the programming language, operating system, and compiler. Different programming languages may allocate different sizes for the same primitive data type.

User-Defined Data Types
In many cases, to solve a problem, system-defined data types are not enough. To overcome this situation, many programming languages allow the users to define their own data types, called user-defined data types.

Structures in C/C++ and classes in Java are good examples. Generally, a user-defined data type is a combination of many primitive data types.
struct usrdef {
    int sysdef1;
    float sysdef2;
    bool sysdef3;
};
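Since this blog is primarily about C#, here is a rough C# equivalent of the same idea (the type and member names are only illustrative):

// A user-defined data type in C#, combining several primitive data types.
public struct UsrDef
{
    public int SysDef1;      // system-defined (primitive) types...
    public float SysDef2;    // ...combined into one
    public bool SysDef3;     // user-defined type
}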
Data Structure
In computer science, a data structure is a data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.

In simple words, a data structure is a way of storing and organizing data so that it can be used efficiently.

Types of Data Structure:
Data structures are classified mainly into two types.
Linear Data Structure:- A linear data structure has data elements arranged in a sequential manner, and each member element is connected to its previous and next element.
Example:- Arrays, linked lists, stacks, queues, etc.

Non-Linear Data Structures:- A non-linear data structure has no set sequence of connecting all its elements and each element can have multiple paths to connect to other elements.
Example:- Trees, Graphs.

For a better understanding of the classification of data structure please have a look at the following diagram.
Types of Data Structure

Key Differences between Linear and Non-Linear Data Structure

Advantages of Data Structures or why we need Data Structure?
We need data structures for several reasons; a few of them are as follows:
  1. They are essential ingredients in creating a fast and powerful algorithm.
  2. They help to manage and organize data.
  3. They make code cleaner and easier to understand.
Abstract Data Types
Basic operations like addition, multiplication, division, and subtraction are supported by primitive data types, but for user-defined data types, we need to define the operations ourselves. Such operations can be defined at the time of implementation.

In general, user-defined data types are defined along with their operations. The combination of data structures along with their operations is called Abstract Data Types (ADTs). It mainly consists of two parts.
  1. Declaration of data
  2. Declaration of operations
What is an Algorithm?
An algorithm is a set of step-by-step instructions to solve a particular problem.

Example:- Adding two numbers given by the users
Step 1: Start
Step 2: Declare variables no1, no2, and summation.
Step 3: Read values no1 and no2 that is provided by the user.
Step 4: Add no1 and no2 and assign the result to summation.
             summation ← no1 + no2
Step 5: Display summation
Step 6: Stop
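A direct C# translation of these steps might look like the following minimal sketch (input validation omitted):

using System;

public static class AddTwoNumbers
{
    public static void Main()
    {
        // Step 2: declare variables no1, no2, and summation.
        Console.Write("Enter first number: ");
        int no1 = int.Parse(Console.ReadLine());       // Step 3: read values from the user

        Console.Write("Enter second number: ");
        int no2 = int.Parse(Console.ReadLine());

        int summation = no1 + no2;                     // Step 4: summation ← no1 + no2

        Console.WriteLine($"Summation = {summation}"); // Step 5: display summation
    }
}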
Properties of a good algorithm
  1. Input and output should be clearly defined.
  2. Each step in the algorithm should be clear and unambiguous.
  3. Algorithms should be most effective among many different ways to solve a problem.
The algorithm should be written in such a way that it can be used in different programming languages; it should describe the steps, not actual code.

In the next article, I am going to discuss the Analysis of Algorithm in detail. Here, in this article, I try to give a brief introduction to Data Structure and Algorithm, and I hope you enjoy this Introduction to Data Structure and Algorithm article. I would like to have your feedback. Please post your feedback, question, or comments about this article.

Summary:
I hope this post will be helpful to understand the Introduction to Data Structure and Algorithm
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉

Blazor Project Structure


In this article, I am going to discuss Blazor Project Structure in detail. Please read our previous article, where we discussed the Blazor Hosting Models. At the end of this article, you will understand the need and use of different files and folders of the Blazor application.

ASP.NET Core Blazor Project Structure:
In part 3 of this article series, we created one solution with two projects. The following is the structure.
ASP.NET Core Blazor Project Structure

Now, let us discuss the files and folders one by one.

Program.cs
This file contains the Main() method, which is the entry point for both the project types (i.e., Blazor WebAssembly and Blazor Server).

In a Blazor server project, the Main() method calls CreateHostBuilder() method which sets up the ASP.NET Core host.

In a Blazor WebAssembly project, the App component, which is the root component of the application, is specified in the Main method. This root component is present in the root project folder in the App.razor file.

Components are the building blocks of a Blazor application. We will discuss everything you need to know to build effective and reusable Blazor components in our upcoming articles. For now, just understand that the App component is the root component of the application.
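For reference, the Main() method of a Blazor WebAssembly project generated by the default template of that period looks roughly like this (exact details depend on the SDK version and your project namespace):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static async Task Main(string[] args)
    {
        var builder = WebAssemblyHostBuilder.CreateDefault(args);

        // The App component (App.razor) is attached to the <app> element in wwwroot/index.html.
        builder.RootComponents.Add<App>("app");

        // Services registered here are available to components via dependency injection.
        builder.Services.AddTransient(sp => new HttpClient
        {
            BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
        });

        await builder.Build().RunAsync();
    }
}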

wwwroot
For both the project types, this folder contains static files like images, stylesheets, etc.

App.razor
This is the root component of the application. It uses the built-in Router component and sets up client-side routing. It is this Router component that intercepts browser navigation and renders the page that matches the requested address. The Router uses the Found property to display the content when a match is found. If a match is not found, the NotFound property is used to display the message – Sorry, there’s nothing at this address.

Pages folder
This folder contains the _Host razor page and the routable components that make up the Blazor app. The components have the .razor extension.
    Index component (Index.razor) – Rendered when we navigate to the root application URL.
    Counter component (Counter.razor) – Rendered when we navigate to the path /counter.
    FetchData component (FetchData.razor) – Rendered when we navigate to the path /fetchdata.
    Error component (Error.razor) – Rendered when an unhandled exception occurs in the Blazor app.
Shared folder
As the name implies, this folder contains the shared components.

MainLayout component (MainLayout.razor)
The application’s main layout component

NavMenu component (NavMenu.razor)
Implements the navigation menu on the sidebar. The NavLink component renders navigation links to other Razor components like the Index, Counter, and FetchData components. This NavLink component is intelligent enough to highlight the navigation menu item if its component is currently displayed.

_Imports.razor
This is like the _ViewImports.cshtml file in an ASP.NET Core MVC project. This file contains the common namespaces so we do not have to include them in every razor component.

wwwroot/index.html
This is the root page in a Blazor WebAssembly project and is implemented as an HTML page. When a first request hits the application, it is this page, that is initially served. It has the standard HTML, HEAD, and BODY tags. It specifies where the root application component App.razor should be rendered. You can find this App.razor root component in the root project folder. It is included on the page as an HTML element.

This index.html page also loads the Blazor WebAssembly JavaScript file (_framework/blazor.webassembly.js). It is this file that is responsible for downloading the compiled Blazor application, its dependencies, and the .NET runtime. It also initializes the runtime to run the Blazor app in the browser.

Startup.cs
This file is present only in the Blazor Server project. As the name implies, it contains the application's startup logic. The Startup class contains the following two methods.

ConfigureServices – Configures the application's DI (i.e., dependency injection) services. For example, the AddServerSideBlazor() method adds Blazor server-side services. On the IServiceCollection interface, there are several methods that start with the word Add. These methods add different services for the Blazor application. We can even add our own services to the DI container. We will see this in action in our upcoming articles.

Configure – Configures the app’s request processing pipeline. Depending on what we want the Blazor application to be capable of doing we add or remove the respective middleware components from the request processing pipeline. For example, UseStaticFiles() method adds the middleware component that can serve static files like images, CSS, etc.

MapBlazorHub sets up the endpoint for the SignalR connection with the client browser.
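Putting these pieces together, the Startup class of a Blazor Server project typically looks roughly like this (based on the default template; the generated code may differ slightly):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Startup
{
    // Registers services with the dependency injection container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();          // needed for the _Host.cshtml fallback page
        services.AddServerSideBlazor();    // adds the Blazor Server services
        // Our own application services can be registered here as well.
    }

    // Configures the request processing pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseStaticFiles();   // serves images, CSS, JS from wwwroot
        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();                  // SignalR endpoint for the client browser
            endpoints.MapFallbackToPage("/_Host");     // root page of the application
        });
    }
}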

Pages/_Host.cshtml
This is the root page of the application and is specified by calling MapFallbackToPage(“/_Host”) method. It is implemented as a razor page.

It is this page, that is initially served when a first request hits the application. It has the standard HTML, HEAD, and BODY tags. It also specifies where the root application component, App component (App.razor) must be rendered. Finally, it also loads the blazor.server.js JavaScript file, which sets up the real-time SignalR connection between the server and the client browser. This connection is used to exchange information between the client and the server. SignalR is a great framework for adding real-time web functionality to apps.

The data folder (Blazor Server)
Contains code files related to the sample WeatherForecast service

appsettings.json (Blazor Server)
Just like an asp.net core MVC project, a blazor project also uses this file to store the configuration settings.

Here, in this article, I try to explain the Blazor Project Structure in Detail. I hope now you got some idea regarding the files and folders of the ASP.NET Core Blazor App.

Summary:
I hope this post will be helpful to understand the Blazor Project Structure
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉

Blazor Hosting Models


In this article, I am going to discuss Blazor Hosting Models in detail. Please read our previous article, where we discussed the step-by-step process to develop a Blazor app using Visual Studio 2019. As we already discussed in our previous article, Blazor has two hosting models. They are as follows:
  1. Client-side hosting model
  2. Server-side hosting model
The Blazor Server App template is used to create a blazor application with a server-side hosting model. On the other hand, the Blazor WebAssembly App template creates a blazor application with a client-side hosting model. At the end of this article, you will understand the following pointers in detail.
  1. Understand the Client-side hosting model (Blazor WebAssembly App)
  2. Understand the Server-side hosting model (Blazor Server App)
  3. Difference between these two hosting models.
  4. Advantages and disadvantages of each hosting model
Blazor WebAssembly Hosting Model:
In the Blazor WebAssembly Hosting Model, the application runs directly in the browser with the help of WebAssembly. So, the things required to run a .NET application, such as the compiled application code, its dependencies, and most importantly the .NET runtime, are downloaded to the client browser from the server, as shown in the below image.
Blazor WebAssembly Hosting Model

It is also possible that a Blazor WebAssembly application can run entirely on the client browser without making a connection to the server, or we can also configure it to interact with the server using RESTful Web API service calls or using SignalR.

Advantages of Blazor WebAssembly Hosting Model:
As we already discussed, a Blazor WebAssembly application can run entirely in the client browser. Once the application is downloaded, a connection to the server is not required. That means even if the network connection to the server is lost, the client app can continue to function. So, there is no need for the server to be up and running 24x7.

We do not require a full-blown ASP.NET Core web server to host our application. We just need a server somewhere, that can deliver the application to the client browser. That means you can host the application on your own server on the internet somewhere, in the cloud, on Azure as a static website, or even on a CDN (Content Delivery Network).

With code running on the client’s machine it means the server load is significantly reduced.

Disadvantages of Blazor WebAssembly Hosting Model:
The blazor.webassembly.js file bootstraps the client application. That means it will download all the required .NET DLL assemblies, their dependencies, and the .NET runtime, which makes the first request take a longer time. If the same client visits the application again, it usually loads the page fast because the browser caches the files.

As the application runs entirely on the client browser, it is restricted to the capabilities of the browser.

Depending on the nature of the application, capable client hardware and software is required. From a software standpoint, for example, at least a browser with WebAssembly support is required.

Blazor Server Hosting Model:
With Blazor Server Hosting Model, the application is run on the server. Between the client browser and the server, a SignalR connection is established. When an event occurs on the client such as a button click, the information about the event is sent to the server over the SignalR connection.

The server handles the event and for the generated HTML a difference is calculated. The entire HTML is not sent again to the client, it’s only the difference that is sent to the client over the SignalR connection. The browser then updates the UI. Since only the difference is applied to update the UI, the application feels faster and more responsive to the user.
Blazor Server Hosting Model

Advantages of Blazor Server Hosting Model:
The application loads much faster as the download size is significantly smaller than that of a Blazor WebAssembly app.

Since the app runs on the server, it can take full advantage of server capabilities.

All the client needs, to use the app is a browser. The Blazor server-side apps even work on older browsers as there is no requirement for Web Assembly, only HTML, and JavaScript. As the code executes on the server, it is also possible to debug our .NET code in Visual Studio.

Disadvantages of Blazor Server Hosting Model:
Blazor server-side sets up an in-memory session for the current client and uses a SignalR connection to communicate between the .NET code running on the server and the client's browser. All memory and CPU usage comes at a cost to the server, for all users. It also means that the client is tied to the server that first served it, so it doesn't work well with load balancing.

An active connection to the server is always required. This means there is a need to keep the server up and running 24X7. If the server is down, the application stops working.

As every user interaction involves a round trip to the server, a higher latency usually exists when compared with Blazor WebAssembly hosting.

Scalability can be challenging especially for the apps that have many users as the server must manage multiple client connections and handle client states. However, we can overcome this scalability issue, by using Azure SignalR Service with a Blazor Server app. This service allows a Blazor Server app to scale really well by supporting a large number of concurrent SignalR connections.

Note:
One very important point to keep in mind is, Blazor Server and Blazor WebAssembly are just 2 different ways we can host a Blazor application.

In the next article, I am going to discuss the Folder and File structure of the Blazor Server App and Blazor WebAssembly App in detail. Here, in this article, I try to explain the Blazor Hosting Models in detail and I hope you enjoy this article.

Summary:
I hope this post will be helpful to understand the concept of Blazor Hosting Models
Please share this post with your friends and colleagues.
For any queries please post a comment below.
Happy Coding 😉