PyTorch: flattening before a linear layer. The notes below collect common questions, explanations, and fixes around one recurring issue: how to reshape the output of convolutional (or otherwise multi-dimensional) layers before feeding it to nn.Linear. Python 3.8 or later is preferred.
Why flatten at all? A convolution layer produces activations of shape [batch_size, channels, height, width], while nn.Linear expects [batch_size, in_features] (more precisely, it accepts any shape [batch_size, *, in_features] and operates only on the last dimension). To connect the two you have to "stack" the three dimensions [channels, height, width] into a single feature dimension before the activations reach the fully connected layer. The building blocks themselves are simple: nn.Conv2d takes in_channels, out_channels and a kernel_size; nn.Linear maps an input of size in_features to out_features and is what turns the flattened features into class scores. (One of the Chinese write-ups collected here follows the "Xiaotudui" PyTorch tutorial on torch 2.1+cu118 with matching torchaudio and torchvision, but nothing below depends on that environment.)

Flattening is available in three forms in PyTorch: the function torch.flatten(input, start_dim=0, end_dim=-1), the tensor method x.flatten(), and the module nn.Flatten for use inside nn.Sequential. All three share the same implementation; the practical difference is the default start_dim. torch.flatten and Tensor.flatten flatten every dimension by default, whereas nn.Flatten(start_dim=1, end_dim=-1) starts at dimension 1, so the batch dimension is preserved, which is almost always what you want inside a model. start_dim names the first dimension to be flattened and end_dim the last, so you can flatten only a slice of the dimensions and keep the rest intact. The same effect can be had with out = out.view(out.size(0), -1) or a reshape(batch_size, -1) before feeding the activations to the linear layer; torch.flatten is the more single-purpose tool, reshape the more general and powerful one. A small helper sometimes seen in tutorials does the fully-flattening version by hand:

    def flatten(t):
        t = t.reshape(1, -1)
        t = t.squeeze()
        return t

Because the argument t can be any tensor, -1 is passed as the second argument to reshape() so that PyTorch infers the length.

The part people actually get stuck on is the in_features of the first linear layer after the convolutional stack. One asker defined nn.Linear(in_features=2304, out_features=512) and wanted to know why the input features are 2304 rather than 6400: the two rounds of Conv2d followed by MaxPool2d shrink the spatial size before the flatten, so the flattened length is channels x reduced_height x reduced_width, not channels x original_height x original_width. You can either work this out with the convolution arithmetic (the size formula appears further down) or simply run the model on a dummy sample, say torch.randn(32, 3, 60, 60), where 32 is the batch size, 3 the number of input channels and 60x60 the image size, print the shape right before the linear layer, and set in_features accordingly; a sketch of this trick appears at the end of this note.

A related trap shows up when slicing pretrained models. A common pattern is resnetk = models.resnet18(pretrained=True); num_ftrs = resnetk.fc.in_features; resnetk = nn.Sequential(*list(resnetk.children())[:-1]), followed by a new dropout and FC layer. Rewrapping the modules in an nn.Sequential container can easily break a model, because any functional calls in the original forward are lost: for VGG11, for example, you would be missing the torch.flatten call, which creates exactly the shape mismatch discussed here. The rewrap only works when the layers really are initialized and executed sequentially.

For the curious, the same flattening happens inside ATen: the helper _flatten_nd_linear flattens all but the last dimension of the input tensor before handing it to the linear operation:

    // `_flatten_nd_linear` flattens all but the last dimension of the input tensor
    // before passing it to the linear operation
    static inline Tensor _flatten_nd_linear(
        const Tensor& input, const Tensor& weight, const Tensor& bias) { /* ... */ }

A few other threads are woven through this digest: one user stuck on CUDA 8 had to stay on torch 1.x, which limited the options available; another wanted to sidestep flattening entirely by implementing a multi-linear map L^{ij}_{kl} that acts directly on n-dimensional tensors, so that a 2-D tensor x^{ij} would not need to be flattened, mapped, and unflattened; and for sequence models, setting padding_idx=0 on nn.Embedding makes the embedding of index 0 a zero vector, the usual way to handle padded, variable-length sentences. The FLatten Transformer repository (official PyTorch code and pre-trained models, ICCV 2023) returns at the end of these notes.
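A minimal sketch of the dummy-input trick; the conv stack and the 60x60 input below are illustrative stand-ins, not any particular model from the threads above:

    import torch
    import torch.nn as nn

    # Convolutional feature extractor; the channel counts and kernels are arbitrary.
    features = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

    # Run a dummy batch through the conv stack to discover the flattened size.
    with torch.no_grad():
        dummy = torch.randn(1, 3, 60, 60)               # (batch, channels, height, width)
        n_features = features(dummy).flatten(1).shape[1]

    # Now the first linear layer can be created with the correct in_features.
    classifier = nn.Linear(n_features, 10)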
Conceptually, the values of the last convolutional feature map live in a 2-D grid per channel, and the network usually ends with a fully connected layer followed by the logits. Each feature value in that last map therefore has to be mapped into the fully connected layer that follows; in other words, the features become a single 1-D vector per sample. That is exactly what PyTorch's flatten does: it reshapes a tensor with any number of dimensions into a single dimension so that further operations can be applied, and with start_dim and end_dim you can restrict the flattening to a specific range of dimensions while preserving the others.

The same preparation question comes up outside of vision. For an encoder-decoder seq2seq model built from nn.Embedding, nn.LSTM and nn.Linear, the decoder is typically declared as class DecoderRNN(nn.Module) with an __init__(self, embed_size, hidden_size, output_size, dropout_rate) that wires those pieces together; no spatial flattening is needed there, because nn.Linear happily consumes [batch, seq_len, hidden_size] and transforms only the last dimension.

Back to images, a typical question: "My CNN is defined with an object-oriented approach (from torch import nn, cuda); it trains fine, loss and accuracy improve gradually, but here is my reasoning for the size of the first fully connected layer." The input is a 3-channel 32x32 image with a fully connected layer attached after the convolutional part, so the in_features of that layer must equal channels x height x width of the activations after the last conv/pool stage, not of the raw image. A sketch with made-up layer sizes follows.
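The channel counts here are invented; the point is the torch.flatten(x, 1) between the conv/pool stack and the fully connected layer, and the 16 * 8 * 8 that determines its in_features:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallCNN(nn.Module):
        """Toy CNN for 3x32x32 images; the channel counts are illustrative."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)    # 32x32 -> 32x32
            self.conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)   # 16x16 -> 16x16
            self.pool = nn.MaxPool2d(2)                               # halves H and W
            # After two conv+pool blocks: 16 channels x 8 x 8 = 1024 features.
            self.fc = nn.Linear(16 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.pool(F.relu(self.conv1(x)))   # -> (B, 8, 16, 16)
            x = self.pool(F.relu(self.conv2(x)))   # -> (B, 16, 8, 8)
            x = torch.flatten(x, 1)                # keep the batch dimension
            return self.fc(x)

    out = SmallCNN()(torch.randn(4, 3, 32, 32))    # -> shape (4, 10)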
While using nn.Linear you might encounter some common errors; here are a few of them along with their solutions. The most frequent is the shape mismatch: before using a dense layer (a linear layer, in PyTorch terms) you have to flatten the output of the preceding layers and feed the flattened result to nn.Linear. One concrete case is fusing the results of two backbones (a ViT and a VMamba branch in the thread this comes from): the fused result may have the shape [batch, 2, H, W], and you should transform it, flattening everything after the batch dimension, before calling self.head(new_result); the nn.Linear documentation spells out which shapes the head will accept.

Separately, someone asked how to parse the PyTorch graph and store the inbound nodes of specific layers, for example via torch.jit tracing. One thing to keep in mind is that a flatten done with torch.flatten inside forward is a function call, not a module, so it will not show up as a submodule in such a trace; this is one practical reason to prefer the nn.Flatten module or a small custom Flatten class when the operation needs to be visible as a layer.

A second family of errors concerns the loss. nn.CrossEntropyLoss expects raw logits as the model output, with shape [batch_size, nb_classes, *]. If you take torch.max on the model output and store the indices in pred, then feed pred to the loss, two things go wrong: the tensor is detached from the computation graph (so the model will not train) and the nb_classes dimension is gone. Keep the logits for the loss and use the argmax only for reporting accuracy.

A third recurring question is simply about batching. One user had a training set of 3,200 mel-spectrograms ("melgrams"), each of shape [21, 128]; a DataLoader with batch_size=16 yields batches of shape [16, 21, 128], i.e. 16 melgrams per batch, and each batch has to be flattened to [16, 21*128] before the first linear layer (or the linear layer applied along the last dimension only, depending on what the model should do). A sketch of this setup follows.
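A sketch of the melgram pipeline; the ten output classes and the single-layer classifier are assumptions made for illustration, only the tensor shapes come from the question:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in data: 3200 mel-spectrograms of shape [21, 128], as in the question above.
    melgrams = torch.randn(3200, 21, 128)
    labels = torch.randint(0, 10, (3200,))
    loader = DataLoader(TensorDataset(melgrams, labels), batch_size=16)  # batches of [16, 21, 128]

    model = nn.Sequential(
        nn.Flatten(),              # [16, 21, 128] -> [16, 21 * 128] = [16, 2688]
        nn.Linear(21 * 128, 10),   # raw logits, suitable for nn.CrossEntropyLoss
    )

    criterion = nn.CrossEntropyLoss()
    x, y = next(iter(loader))
    loss = criterion(model(x), y)  # logits of shape [16, 10] against integer targets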
If the model really is a plain chain of layers, wrapping it in nn.Sequential with an nn.Flatten in the right place is perfectly fine; in that case the Flatten and the nn.Linear work exactly as intended, for example nn.Sequential(..., nn.Flatten(), nn.Linear(16 * 14 * 14, 10)). The start_dim argument denotes the first dimension to be flattened and is zero-indexed, which is why the default of 1 in nn.Flatten leaves the batch dimension alone.

A worked example from the forum: "I have an input tensor of shape [10, 1, 74, 74], where 10 is the batch size; I need to flatten it and push it through an fc layer." The flattened length per sample is 1 x 74 x 74 = 5476, so the layer is declared as self.fc = nn.Linear(5476, 1024); the flattened batch has shape [10, 5476], and [10, 1024] is the shape after the fc layer, not after the flatten. A sketch is below.

One more diagnostic tidbit: a user whose last layer was nn.Linear printed the outputs at test time, reported that they "always look like a U[-k, k] distribution" (for example tensor([[4.2354, -4.1672]])), and asked whether the output of the preceding layer should be flattened before the final nn.Linear and the softmax. Whatever the specific fix, two general points apply: a linear layer outputs raw logits rather than probabilities, and its in_features must match the flattened feature count of whatever comes before it, otherwise the layer never sees meaningful features in the first place.
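The same numbers in code (the 1024 output size is the one used in the question):

    import torch
    from torch import nn

    x = torch.randn(10, 1, 74, 74)          # batch of 10 single-channel 74x74 maps

    flat = torch.flatten(x, start_dim=1)    # -> [10, 1 * 74 * 74] = [10, 5476]
    fc = nn.Linear(5476, 1024)              # hence nn.Linear(5476, 1024)
    out = fc(flat)                          # -> [10, 1024]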
The same sizing problem appears with 1-D data: "I'm working on an assignment with 1-D signals and have trouble finding the right input size for the linear layer (the XXX placeholder in my code); my signals have different lengths and are padded in a batch." Two workable answers: compute the post-convolution length for the padded length with the usual size formula, or put an adaptive pooling layer (for example nn.AdaptiveAvgPool1d) just before the flatten so that the linear layer always receives the same number of features regardless of the padded length.

A shape question that is really a modelling question: wanting the flattened result to be [batch_size * node_num, attribute_num] is a bit unusual; normally you want [batch_size, node_num * attribute_num] so that it matches the in_features of the next linear layer. If the former really is what you need, a plain view or reshape does it.

Another thread: "Following my last post, I am now trying to concatenate the flattened output of a CNN with another tensor in the forward pass," i.e. a model that takes additional input data besides the image. The plan was to take a standard CNN, take one of its last FC layers, concatenate it with the additional input data, and add FC layers that process both inputs together, along the lines of additional_data_dim = 100 and output_classes = 2 (in the poster's diagram, the extra tensor is the pink area). A sketch of that wiring follows.
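A sketch of the image-plus-extra-features wiring; only additional_data_dim=100 and output_classes=2 come from the question, while the backbone, the 64x64 input and the hidden size of 64 are invented for illustration:

    import torch
    import torch.nn as nn

    class ImagePlusExtras(nn.Module):
        """Concatenate flattened CNN features with an extra feature vector before the FC head."""
        def __init__(self, additional_data_dim=100, output_classes=2):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)
            )
            cnn_features = 16 * 4 * 4
            self.head = nn.Sequential(
                nn.Linear(cnn_features + additional_data_dim, 64),
                nn.ReLU(),
                nn.Linear(64, output_classes),
            )

        def forward(self, image, extras):
            x = torch.flatten(self.backbone(image), 1)   # (B, 256)
            x = torch.cat([x, extras], dim=1)            # (B, 256 + additional_data_dim)
            return self.head(x)

    model = ImagePlusExtras()
    logits = model(torch.randn(8, 3, 64, 64), torch.randn(8, 100))   # -> (8, 2)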
In PyTorch, the -1 in view or reshape means "infer this dimension from the number of elements left over," which is why x.view(x.size(0), -1), or equivalently x.reshape(x.shape[0], -1) or torch.flatten(x, 1), is the standard way to keep the batch dimension and fold everything else into the feature dimension. But to the underlying question, "do you mean how to flatten a conv layer's output for a linear layer?": yes. Conv layers want multi-dimensional activations while linear layers want one vector per sample, so a per-sample (m, n) matrix has to become a vector of length m*n. If you wrote x = torch.flatten(x) with no start_dim, replace it with x = x.reshape(x.shape[0], -1) (or torch.flatten(x, 1)); that guarantees the network takes the batch size into account before the input reaches the linear layer, instead of collapsing the whole batch into one long vector.

There are also ways to avoid hand-computing in_features at all. You can add a global operation such as global average or max pooling just before the view, as the ResNet definition does (where the pooling kernel size can even be computed on the fly with the functional interface), so the number of inputs to the linear layer is known regardless of the spatial size. If a quantity like the number of views per sample is known before the forward pass, pass it to the model's __init__ and size the linear layer there; if it changes per batch, an adaptive pooling layer with a fixed output size is the robust choice. One poster instead used the functional linear layer, torch.nn.functional.linear, to handle a flattened feature count that is only known at run time.

Batched inputs need no special treatment either: nn.Linear is defined by (in_features, out_features) and is applied to the last dimension only, so an input of shape [nr_of_observations, batch_size, in_features], or any [batch, *, in_features], can be processed in one call in the forward pass without flattening the leading dimensions. A few illustrative shapes follow.
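A few shapes to make the distinction concrete (all sizes here are arbitrary):

    import torch
    from torch import nn

    x = torch.randn(8, 32, 4, 4)         # e.g. activations after the last conv layer

    a = torch.flatten(x)                  # shape [4096]: batch dim gone - rarely what you want
    b = torch.flatten(x, 1)               # shape [8, 512]: keeps the batch dimension
    c = x.view(x.size(0), -1)             # shape [8, 512]: -1 is inferred from the remaining elements
    d = x.reshape(x.shape[0], -1)         # shape [8, 512]: reshape also copes with non-contiguous tensors

    # nn.Linear only transforms the last dimension, so [batch, *, in_features] needs no flattening:
    seq = torch.randn(8, 21, 128)         # e.g. [batch, time, features]
    out = nn.Linear(128, 64)(seq)         # shape [8, 21, 64]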
Building the network itself: PyTorch models are defined as subclasses of nn.Module, and the standard tutorial example composes nn.Flatten, nn.Linear, nn.ReLU and nn.Sequential into exactly the shape discussed here. In the MNIST case the input is a single-channel 28x28 image, so nn.Flatten turns [N, 1, 28, 28] into [N, 784] before the first linear layer. For training you grab whatever accelerator is available, CUDA or MPS if torch.cuda.is_available() or torch.backends.mps.is_available() reports one, otherwise the CPU, and move the model there with .to(device); Module.to also accepts a dtype, or a reference tensor whose dtype and device the parameters and buffers should adopt. If the model really is just a sequential stack, you can build it from an OrderedDict passed to nn.Sequential instead of writing a Net class. Creating the layers up front is what lets calls like model.cuda() or optim.SGD(model.parameters(), ...) work before the model has seen a single sample, and it is also the main reason PyTorch asks for the input size of a linear layer at initialization time: the weights have to be created then and there. A sketch of the tutorial-style network, including device selection, appears at the end of this note.

Recurrent layers deserve a footnote. A word-generating network built from an LSTMCell plus a Linear output layer can work perfectly while the "same" architecture using an nn.LSTM instance plus a Linear output layer produces nonsense; the usual culprits are the output shape of nn.LSTM (whether batch_first=True is set, and which dimensions get permuted or flattened before the linear layer) rather than the linear layer itself. Relatedly, flatten_parameters() on an RNN or LSTM has nothing to do with flattening activations: it compacts the module's weights into a single contiguous block of memory so they are convenient to operate on (and efficient for cuDNN).
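A sketch in the spirit of the quickstart tutorial that the translated passages above come from; the 512-unit hidden layers follow that tutorial, the rest is boilerplate:

    import torch
    from torch import nn

    # Pick an accelerator if one is available, otherwise fall back to the CPU.
    device = (
        "cuda" if torch.cuda.is_available()
        else "mps" if torch.backends.mps.is_available()
        else "cpu"
    )

    class NeuralNetwork(nn.Module):
        """Flatten the 28x28 image, then run a small MLP."""
        def __init__(self):
            super().__init__()
            self.flatten = nn.Flatten()                  # [N, 1, 28, 28] -> [N, 784]
            self.linear_relu_stack = nn.Sequential(
                nn.Linear(28 * 28, 512),
                nn.ReLU(),
                nn.Linear(512, 512),
                nn.ReLU(),
                nn.Linear(512, 10),
            )

        def forward(self, x):
            x = self.flatten(x)
            return self.linear_relu_stack(x)

    model = NeuralNetwork().to(device)
    logits = model(torch.rand(1, 1, 28, 28, device=device))   # -> shape [1, 10]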
To get the number by calculation rather than trial and error, look at the forward method and the network's input shape and apply the standard size formula layer by layer: for a convolution, Output = floor((I - K + 2P) / S) + 1, where I is the input size, K the kernel size, P the padding and S the stride; a max-pooling operation follows the same formula with its own kernel and stride. Working through the classic MNIST example this way gives the familiar 4*4*50 = 800 input features of its first fully connected layer, and it is also how you discover that a guess like 16*3*3 in self._linear_block(main, 'linear_0', 16*3*3, 120) is simply the wrong number for the tensor the conv stack actually produces. One forum answer wrapped the arithmetic in a helper; the body below is reconstructed from its docstring, so treat it as a sketch:

    import math

    def flatten(w, k=3, s=1, p=0, m=True):
        """
        Returns the right size of the flattened tensor after convolutional transformation
        :param w: width of image
        :param k: kernel size
        :param s: stride
        :param p: padding
        :param m: max pooling (bool)
        :return: proper shape
        """
        out = math.floor((w - k + 2 * p) / s) + 1   # conv output size
        return out // 2 if m else out               # optional 2x2 max pool

Two smaller follow-ups from the same threads. If the activations reaching the classifier have shape B x 32 x 1 x 1, adding x = torch.squeeze(x) just before the call to self.fc collapses them to B x 32, which is exactly what a linear layer with in_features=32 expects (a flatten would do the same here). And a tangent on inverting layers: lasagne's InverseLayer works by using the derivative of the layer it is based on, effectively replaying that layer's backpropagation step, so it does not handle the nonlinearity for you; you either apply the derivative yourself (for a sigmoid, d/dx sigmoid(x) = sigmoid(x)(1 - sigmoid(x)), i.e. something like reconstruct_2 = self.deconv2(fc * (1 - fc))) or lean on torch.autograd.grad.
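Plugging the formula into the MNIST walk-through mentioned above (a self-contained version of the helper's arithmetic):

    import math

    def conv_out(w, k, s=1, p=0):
        # Output = floor((I - K + 2P) / S) + 1, per the formula quoted above
        return math.floor((w - k + 2 * p) / s) + 1

    # Classic MNIST example: Conv2d(1, 20, 5) -> MaxPool2d(2) -> Conv2d(20, 50, 5) -> MaxPool2d(2)
    w = 28
    w = conv_out(w, k=5) // 2   # 24 -> 12 after the 2x2 max pool
    w = conv_out(w, k=5) // 2   # 8  -> 4  after the 2x2 max pool
    print(w * w * 50)           # 800, i.e. the 4*4*50 in_features of the first linear layer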
Two loose ends. First, the "FLatten" in FLatten Transformer (ICCV 2023) is unrelated to torch.flatten: the repository contains the official PyTorch code and pre-trained models for "FLatten Transformer: Vision Transformer using Focused Linear Attention," which introduces a simple yet effective mapping function and an efficient rank restoration module and proposes the Focused Linear Attention (FLatten) module, which the authors report adequately addresses the shortcomings of linear attention while achieving both high efficiency and expressiveness.

Second, a porting question: "I have a reasonable understanding of building networks in TensorFlow and tried to port one to its PyTorch equivalent; my TensorFlow example has the layers input -> Flatten -> Dense(300) -> Dense(100), but I cannot work out the dense layer definition in PyTorch." The direct translation is nn.Flatten followed by nn.Linear layers, with the one wrinkle that PyTorch makes you state in_features explicitly (Keras infers it at build time); as recommended earlier, it is also worth putting an activation function between the linear layers. A sketch follows.
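A sketch of the PyTorch equivalent; the 3072 in_features assumes a flattened 32x32x3 input, which is not stated in the original question, and the ReLU between the linear layers is the optional activation suggested above:

    import torch
    from torch import nn

    model = nn.Sequential(
        nn.Flatten(),            # same role as Keras' Flatten()
        nn.Linear(3072, 300),    # Dense(300): in_features must be given explicitly in PyTorch
        nn.ReLU(),
        nn.Linear(300, 100),     # Dense(100)
    )

    out = model(torch.randn(4, 3, 32, 32))   # -> shape [4, 100]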