
Column-Wise Multiplication of Two Matrices in Python

A tutorial on column-wise multiplication of two matrices using plain Python lists

Suppose we want to multiply two lists of different lengths in Python. How do we do it? There are several ways: the built-in map() can combine two lists element by element, but it pairs items position by position, so it really only suits lists of equal length; NumPy handles this sort of thing well, but pulling in NumPy arrays just for this adds complexity. Here I'll show you how to do it simply, using plain list logic.
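
For reference, here is what the equal-length case looks like with the built-in tools; a minimal sketch, with throwaway lists p and q:

import operator

p = [1, 2, 3]
q = [4, 5, 6]

# map() pairs elements position by position, so the lists must line up
print(list(map(operator.mul, p, q)))    # [4, 10, 18]

# zip() does the same pairing and reads a little more naturally
print([x * y for x, y in zip(p, q)])    # [4, 10, 18]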

Suppose I have a 2D array and I want to work with it column-wise. The first step is to walk through the matrix column by column, i.e. take its transpose:

[ 1 2 3 ]             [ 1 4 7 ]
[ 4 5 6 ]   becomes   [ 2 5 8 ]
[ 7 8 9 ]             [ 3 6 9 ]

So in Python we create a list inside a list:

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [1, 2, 3]  # the vector we will multiply by shortly; here we only use its length

# a[col][row] walks down column `row`, so each printed list is one column of a
for row in range(len(a)):
    print([a[col][row] for col in range(len(b))])


[1, 4, 7]
[2, 5, 8]
[3, 6, 9]
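
As an aside, Python's built-in zip can produce the same column-by-column walk in one line; a small sketch using the same a as above:

# zip(*a) groups the i-th element of every row, i.e. the i-th column
for column in zip(*a):
    print(list(column))

This prints the same three columns as the loop above.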

Now for the actual column-wise multiplication. Take a bigger example: a 6x6 matrix mul whose columns we want to multiply elementwise by a vector hn.

mul = [[1, 2, 3, 4, 0, 0], [0, 1, 2, 3, 4, 0], [0, 0, 1, 2, 3, 4], [4, 0, 0, 1, 2, 3], [3, 4, 0, 0, 1, 2], [2, 3, 4, 0, 0, 1]]
hn = [-3, 2, 1, 0, 0, 0]

So the answer should be:
[[-3, 0, 0, 0, 0, 0], [-6, 2, 0, 0, 0, 0], [-9, 4, 1, 0, 0, 0], [-12, 6, 2, 0, 0, 0], [0, 8, 3, 0, 0, 0], [0, 0, 4, 0, 0, 0]]

# column i of mul, multiplied elementwise by hn
for i in range(len(hn)):
    print([mul[j][i] * hn[j] for j in range(len(hn))])
[-3, 0, 0, 0, 0, 0]
[-6, 2, 0, 0, 0, 0]
[-9, 4, 1, 0, 0, 0]
[-12, 6, 2, 0, 0, 0]
[0, 8, 3, 0, 0, 0]
[0, 0, 4, 0, 0, 0]

Now if we want to collect the answer into another list instead of just printing it, we simply append each row (note that calling the list `next` would shadow Python's built-in next(), so a different name is safer):

result = []
for i in range(len(hn)):
    result.append([mul[j][i] * hn[j] for j in range(len(hn))])
print(result)
[[-3, 0, 0, 0, 0, 0], [-6, 2, 0, 0, 0, 0], [-9, 4, 1, 0, 0, 0], [-12, 6, 2, 0, 0, 0], [0, 8, 3, 0, 0, 0], [0, 0, 4, 0, 0, 0]]
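
And if NumPy is available after all, the same operation is a one-liner; a minimal sketch to cross-check the result above, assuming numpy is installed:

import numpy as np

# rows of mul_np.T are the columns of mul; broadcasting multiplies each row by hn
mul_np = np.array(mul)
hn_np = np.array(hn)
print((mul_np.T * hn_np).tolist())

This prints the same nested list as the loop version.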
