Top level module for Sequel
There are some module methods that are added via metaprogramming, one for each supported adapter. For example:
DB = Sequel.sqlite # Memory database
DB = Sequel.sqlite('blog.db')
DB = Sequel.postgres('database_name', :user=>'user', :password=>'password', :host=>'host', :port=>5432, :max_connections=>10)
If a block is given to these methods, it is passed the opened Database object, which is closed (disconnected) when the block exits, just like a block passed to connect. For example:
Sequel.sqlite('blog.db'){|db| puts db[:users].count}
Sequel currently adds methods to the Array, Hash, String and Symbol classes by default. You can either require 'sequel/no_core_ext' or set the SEQUEL_NO_CORE_EXTENSIONS constant or environment variable before requiring sequel to keep Sequel from adding methods to those classes.
For a more expanded introduction, see the README. For a quicker introduction, see the cheat sheet.
This extension adds a statement cache to Sequel's postgres adapter, with the ability to automatically prepare statements that are executed repeatedly. When combined with the pg_auto_parameterize extension, it can take Sequel code such as:
DB.extend Sequel::Postgres::AutoParameterize::DatabaseMethods
DB.extend Sequel::Postgres::StatementCache::DatabaseMethods
DB[:table].filter(:a=>1)
DB[:table].filter(:a=>2)
DB[:table].filter(:a=>3)
And use the same prepared statement to execute the queries.
The backbone of this extension is a modified LRU cache. It considers both the last executed time and the number of executions when determining which queries to keep in the cache. It only cleans the cache when a high water mark has been passed, and removes queries until it reaches the low water mark, in order to avoid thrashing when you are using more than the maximum number of queries. To avoid preparing queries when it isn't necessary, it does not prepare them on the server side unless they are being executed more than once. The cache is very tunable, allowing you to set the high and low water marks, the number of executions before preparing the query, and even use a custom callback for determining which queries to keep in the cache.
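The eviction policy described above can be modeled outside Sequel. The following is an illustrative sketch only; the class and option names are invented and do not reflect the extension's actual implementation:

```ruby
# Toy statement cache: once more than `high` entries exist, evict down
# to `low` entries, dropping the entries with the worst combined
# recency/frequency score first.
class ToyStatementCache
  Entry = Struct.new(:sql, :executions, :last_executed)

  def initialize(high: 100, low: 50)
    @high, @low = high, low
    @entries = {}
    @clock = 0
  end

  def execute(sql)
    @clock += 1
    e = (@entries[sql] ||= Entry.new(sql, 0, 0))
    e.executions += 1
    e.last_executed = @clock
    cleanup if @entries.size > @high
    e
  end

  def size
    @entries.size
  end

  private

  # Score mixes recency and frequency; lowest-scored entries go first.
  def cleanup
    sorted = @entries.values.sort_by { |e| e.last_executed + e.executions }
    sorted.first(@entries.size - @low).each { |e| @entries.delete(e.sql) }
  end
end

cache = ToyStatementCache.new(high: 10, low: 5)
20.times { |i| cache.execute("SELECT #{i}") }
cache.size # stays at or below the high water mark
```

The point of the two marks is that a single threshold would evict on every new query once full; cleaning down to a lower mark leaves headroom so cleanup happens in batches.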
Note that automatically preparing statements does have some issues. Most notably, if you change the result type that the query returns, PostgreSQL will raise an error. This can happen if you have prepared a statement that selects all columns from a table, and then you add or remove a column from that table. This extension does attempt to check that case and clear the statement caches if you use alter_table from within Sequel, but it cannot fix the case when such a change is made externally.
This extension only works when the pg driver is used as the backend for the postgres adapter.
The schema_dumper extension supports dumping tables and indexes in a Sequel::Migration format, so they can be restored on another database (which can be the same type or a different type than the current database). The main interface is through Sequel::Database#dump_schema_migration.
The schema_caching extension adds a few methods to Sequel::Database that make it easy to dump the parsed schema information to a file, and load it from that file. Loading the schema information from a dumped file is faster than parsing it from the database, so this can save bootup time for applications with large numbers of models.
Basic usage in application code:
Sequel.extension :schema_caching
DB = Sequel.connect('...')
DB.load_schema_cache('/path/to/schema.dump')
# load model files
Then, whenever the database schema is modified, write a new cached file.
You can do that with bin/sequel's -S option:

bin/sequel -S /path/to/schema.dump postgres://...
Alternatively, if you don't want to dump the schema information for all tables, and you aren't worried about race conditions, you can choose to use the following in your application code:
Sequel.extension :schema_caching
DB = Sequel.connect('...')
DB.load_schema_cache?('/path/to/schema.dump')
# load model files
DB.dump_schema_cache?('/path/to/schema.dump')
With this method, you just have to delete the schema dump file if the schema is modified, and the application will recreate it for you using just the tables that your models use.
Note that it is up to the application to ensure that the dumped cached schema reflects the current state of the database. Sequel does no checking to ensure this, as checking would take time and the purpose of this code is to take a shortcut.
The cached schema is dumped in Marshal format, since it is the fastest and it handles all ruby objects used in the schema hash. Because of this, you should not attempt to load the schema from an untrusted file.
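The Marshal round trip the cache relies on can be shown with a plain hash shaped loosely like parsed schema data (the table and column details here are invented for illustration):

```ruby
# A hash shaped loosely like Sequel's parsed schema information.
schema = {
  :items => [
    [:id,   {:type=>:integer, :primary_key=>true,  :allow_null=>false}],
    [:name, {:type=>:string,  :primary_key=>false, :allow_null=>true}]
  ]
}

# Marshal preserves symbols and nested arrays/hashes exactly, which is
# why it is a good fit here -- but never Marshal.load untrusted data.
blob   = Marshal.dump(schema)
loaded = Marshal.load(blob)
loaded == schema # => true
```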
The query extension adds Sequel::Dataset#query which allows a different way to construct queries instead of the usual method chaining.
The server_block extension adds the Database#with_server method, which takes a shard argument and a block, and makes it so that access inside the block will use the specified shard by default.
First, you need to enable it on the database object:
Sequel.extension :server_block
DB.extend Sequel::ServerBlock
Then you can call with_server:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
end
DB[:a].all # Uses default
You can even nest calls to with_server:
DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB.with_server(:shard2) do
    DB[:a].all # Uses shard2
  end
  DB[:a].all # Uses shard1
end
DB[:a].all # Uses default
Note that this extension assumes the following shard names should use the server/shard passed to with_server: :default, nil, :read_only. All other shard names will cause the standard behavior to be used.
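The defaulting behavior, including the nesting shown above, can be modeled with a thread-local variable. This is a simplified sketch with invented names, not the extension's code:

```ruby
# Sketch of with_server-style defaulting: a thread-local holds the
# current default shard, and default-ish lookups fall back through it.
module ShardDefault
  # Shard names that should be redirected to the with_server shard.
  DEFAULT_ALIASES = [:default, nil, :read_only]

  def self.with_server(shard)
    prior = Thread.current[:shard]
    Thread.current[:shard] = shard
    yield
  ensure
    Thread.current[:shard] = prior   # restoring enables clean nesting
  end

  # Returns the shard to actually use for a requested shard name.
  def self.pick(requested = :default)
    if DEFAULT_ALIASES.include?(requested) && (override = Thread.current[:shard])
      override
    else
      requested
    end
  end
end

ShardDefault.with_server(:shard1) do
  ShardDefault.pick          # => :shard1 (default redirected)
  ShardDefault.pick(:shard2) # => :shard2 (explicit shard wins)
end
ShardDefault.pick            # => :default
```

Saving and restoring the prior value in ensure is what makes nested with_server calls unwind correctly.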
The pg_array_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL array functions and operators. The most common usage is taking an object that represents an SQL identifier (such as a :symbol), and calling pg_array on it:
ia = :int_array_column.pg_array
This creates a Sequel::Postgres::ArrayOp object that can be used for easier querying:
ia[1]                                    # int_array_column[1]
ia[1][2]                                 # int_array_column[1][2]
ia.contains(:other_int_array_column)     # @>
ia.contained_by(:other_int_array_column) # <@
ia.overlaps(:other_int_array_column)     # &&
ia.concat(:other_int_array_column)       # ||
ia.push(1)        # int_array_column || 1
ia.unshift(1)     # 1 || int_array_column
ia.any            # ANY(int_array_column)
ia.all            # ALL(int_array_column)
ia.dims           # array_dims(int_array_column)
ia.length         # array_length(int_array_column, 1)
ia.length(2)      # array_length(int_array_column, 2)
ia.lower          # array_lower(int_array_column, 1)
ia.lower(2)       # array_lower(int_array_column, 2)
ia.join           # array_to_string(int_array_column, '', NULL)
ia.join(':')      # array_to_string(int_array_column, ':', NULL)
ia.join(':', ' ') # array_to_string(int_array_column, ':', ' ')
ia.unnest         # unnest(int_array_column)
See the PostgreSQL array function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_array extension, you should load it before loading this extension. Doing so will allow you to use PGArray#op to get an ArrayOp, allowing you to perform array operations on array literals.
The columns_introspection extension attempts to introspect the selected columns for a dataset before issuing a query. If it thinks it can guess correctly at the columns the query will use, it will return the columns without issuing a database query. This method is not fool-proof; it's possible that some databases will use column names that Sequel does not expect.
To enable this for a single dataset, extend the dataset with Sequel::ColumnIntrospection. To enable this for all datasets, run:
Sequel::Dataset.introspect_all_columns
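The guessing idea can be sketched: if every selected value is a plain symbol, the column names are known without running a query; anything else (functions, literal SQL strings) forces a real query. This is a hypothetical helper for illustration; the real extension handles more cases, such as qualified and aliased symbols:

```ruby
# Returns the guessed column names when every selected value is a
# plain Symbol, or nil when introspection would have to fall back
# to issuing a query.
def guess_columns(selection)
  return nil if selection.empty?
  selection.all? { |s| s.is_a?(Symbol) } ? selection : nil
end

guess_columns([:id, :name])      # => [:id, :name]
guess_columns([:id, 'count(*)']) # => nil (can't guess, must query)
```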
This adds a Sequel::Dataset#to_dot
method. The
to_dot
method returns a string that can be processed by
graphviz's dot
program in order to get a visualization of the
dataset. Basically, it shows a version of the dataset's abstract syntax
tree.
The pretty_table extension adds Sequel::Dataset#print and the Sequel::PrettyTable class for creating nice-looking plain-text tables.
The arbitrary_servers extension allows you to connect to arbitrary servers/shards that were not defined when you created the database. To use it, you first extend the Database's connection pool with the Sequel::ArbitraryServers module:
Sequel.extension :arbitrary_servers
DB.pool.extend Sequel::ArbitraryServers
Then you can pass arbitrary connection options for the server/shard to use as a hash:
DB[:table].server(:host=>'...', :database=>'...').all
Because Sequel can never be sure that the connection will be reused, arbitrary connections are disconnected as soon as the outermost block that uses them exits. So this example uses the same connection:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
  DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
    # c == c2
  end
end
But this example does not:
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
end
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
  # c != c2
end
You can use this extension in conjunction with the server_block extension:
DB.with_server(:host=>'...', :database=>'...') do
  DB.synchronize do
    # All of these use the host/database given to with_server
    DB[:table].insert(...)
    DB[:table].update(...)
    DB.tables
    DB[:table].all
  end
end
Anyone using this extension in conjunction with the server_block extension may want to do the following, so that synchronize doesn't need to be called separately:
def DB.with_server(*)
  super{synchronize{yield}}
end
Note that this extension only works with the sharded threaded connection pool. If you are using the sharded single connection pool, you need to switch to the sharded threaded connection pool before using this extension.
The null_dataset extension adds the Sequel::Dataset#nullify method, which returns a cloned dataset that will never issue a query to the database. It implements the null object pattern for datasets.
The most common usage is probably in a method that must return a dataset, where the method knows the dataset shouldn't return anything. With standard Sequel, you'd probably just add a WHERE condition that is always false, but that still results in a query being sent to the database, and can be overridden using unfiltered, the OR operator, or a UNION.
Usage:
ds = DB[:items].nullify.where(:a=>:b).select(:c)
ds.sql # => "SELECT c FROM items WHERE (a = b)"
ds.all # => [] # no query sent to the database
Note that there is one case where a null dataset will send a query to the database. If you call columns on a nulled dataset and the dataset doesn't have an already cached version of the columns, it will create a new dataset with the same options to get the columns.
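The null object pattern the extension applies can be sketched with a module that stubs out the query-executing methods (this is a simplified model, not the extension's implementation):

```ruby
# Minimal null-object sketch: retrieval methods return empty results
# and mutation methods report no rows affected, without ever touching
# a database.
module NullQueries
  def each; self; end     # yields nothing, so Enumerable methods see no rows
  def all; []; end
  def count; 0; end
  def empty?; true; end
  def insert(*); nil; end
  def update(*); 0; end
  def delete; 0; end
end

class FakeDataset
  include Enumerable
end

ds = FakeDataset.new.extend(NullQueries)
ds.all   # => []
ds.count # => 0
```

Because each yields nothing, every Enumerable method (map, select, and so on) also behaves as if the dataset were empty, which is what makes the pattern composable.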
The select_remove extension adds Sequel::Dataset#select_remove for removing existing selected columns from a dataset. It's not part of Sequel core as it is rarely needed and has some corner cases where it can't work correctly.
The thread_local_timezones extension allows you to set a per-thread timezone that will override the default global timezone while the thread is executing. The main use case is for web applications that execute each request in its own thread, and want to set the timezones based on the request. The most common example is having the database always store time in UTC, but have the application deal with the timezone of the current user. That can be done with:
Sequel.database_timezone = :utc
# In each thread:
Sequel.thread_application_timezone = current_user.timezone
This extension is designed to work with the named_timezones extension.
This extension adds the thread_application_timezone=, thread_database_timezone=, and thread_typecast_timezone= methods to the Sequel module. It overrides the application_timezone, database_timezone, and typecast_timezone methods to check the related thread local timezone first, and use it if present. If the related thread local timezone is not present, it falls back to the default global timezone.
There is one special case of note. If you have a default global timezone and you want to have a nil thread local timezone, you have to set the thread local value to :nil instead of nil:
Sequel.application_timezone = :utc
Sequel.thread_application_timezone = nil
Sequel.application_timezone # => :utc
Sequel.thread_application_timezone = :nil
Sequel.application_timezone # => nil
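The :nil sentinel is needed because a thread-local store can't distinguish "unset" from "set to nil". A sketch of the fallback logic, with invented names:

```ruby
# Sketch of the lookup: :nil means "explicitly no timezone", while a
# true nil thread-local means "not set, use the global value".
module TZ
  class << self
    attr_accessor :application_timezone

    def thread_application_timezone=(tz)
      Thread.current[:tz] = tz
    end

    def effective_timezone
      case tz = Thread.current[:tz]
      when nil  then application_timezone # not set: fall back to global
      when :nil then nil                  # explicitly cleared
      else tz
      end
    end
  end
end

TZ.application_timezone = :utc
TZ.effective_timezone                 # => :utc (no thread override)
TZ.thread_application_timezone = :nil
TZ.effective_timezone                 # => nil  (override cleared it)
```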
The pg_hstore_ops extension adds support to Sequel's DSL to make it easier to call PostgreSQL hstore functions and operators. The most common usage is taking an object that represents an SQL expression (such as a :symbol), and calling hstore on it:
h = :hstore_column.hstore
This creates a Sequel::Postgres::HStoreOp object that can be used for easier querying:
h - 'a'                              # hstore_column - 'a'
h['a']                               # hstore_column -> 'a'
h.concat(:other_hstore_column)       # ||
h.has_key?('a')                      # ?
h.contain_all(:array_column)         # ?&
h.contain_any(:array_column)         # ?|
h.contains(:other_hstore_column)     # @>
h.contained_by(:other_hstore_column) # <@
h.defined        # defined(hstore_column)
h.delete('a')    # delete(hstore_column, 'a')
h.each           # each(hstore_column)
h.keys           # akeys(hstore_column)
h.populate(:a)   # populate_record(a, hstore_column)
h.record_set(:a) # (a #= hstore_column)
h.skeys          # skeys(hstore_column)
h.slice(:a)      # slice(hstore_column, a)
h.svals          # svals(hstore_column)
h.to_array       # hstore_to_array(hstore_column)
h.to_matrix      # hstore_to_matrix(hstore_column)
h.values         # avals(hstore_column)
See the PostgreSQL hstore function and operator documentation for more details on what these functions and operators do.
If you are also using the pg_hstore extension, you should load it before loading this extension. Doing so will allow you to use HStore#op to get an HStoreOp, allowing you to perform hstore operations on hstore literals.
The LooserTypecasting extension changes the float and integer typecasting to use the looser .to_f and .to_i instead of the more strict Kernel.Float and Kernel.Integer. To use it, you should extend the database with the Sequel::LooserTypecasting module after loading the extension:
Sequel.extension :looser_typecasting
DB.extend(Sequel::LooserTypecasting)
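The difference between the strict and loose conversions is plain Ruby behavior, which can be seen without Sequel at all:

```ruby
# Strict Kernel conversions raise on input that isn't a complete
# numeric literal:
begin
  Integer('12abc')
rescue ArgumentError
  # raised: '12abc' is not a valid integer literal
end

# Loose String conversions use the leading numeric portion (or 0):
'12abc'.to_i # => 12
'3.5kg'.to_f # => 3.5
''.to_i      # => 0
```

With the extension loaded, typecasting malformed input silently truncates instead of raising, which is why it is opt-in.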
The pagination extension adds the Sequel::Dataset#paginate and each_page methods, which return paginated (limited and offset) datasets with some helpful methods that make creating a paginated display easier.
This extension allows Sequel's postgres adapter to automatically parameterize all common queries. Sequel's default behavior has always been to literalize all arguments unless specifically using parameters (via :$arg placeholders and the prepare/call methods). This extension makes Sequel take all string, numeric, date, and time types and automatically turn them into parameters. Example:
# Default
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = 1

DB.extend Sequel::Postgres::AutoParameterize::DatabaseMethods
DB[:test].where(:a=>1)
# SQL: SELECT * FROM test WHERE a = $1 (args: [1])
This extension is not necessarily faster or safer than the default behavior. In some cases it is faster, such as when using large strings. However, there are also some known issues with this approach:
Because of the way it operates, it has no context to make a determination about whether to literalize an object or not. For example, if it comes across an integer, it will turn it into a parameter. That breaks code such as:
DB[:table].select(:a, :b).order(2, 1)
Since it will use the following SQL (which isn't valid):
SELECT a, b FROM table ORDER BY $1, $2
To work around this, you can either specify the columns manually or use a literal string:
DB[:table].select(:a, :b).order(:b, :a)
DB[:table].select(:a, :b).order('2, 1'.lit)
In order to avoid many type errors, it attempts to guess the appropriate type and automatically casts all placeholders. Unfortunately, if the type guess is incorrect, the query will be rejected. For example, the following works without automatic parameterization, but fails with it:
DB[:table].insert(:interval=>'1 day')
To work around this, you can just add the necessary casts manually:
DB[:table].insert(:interval=>'1 day'.cast(:interval))
You can also work around any issues that come up by disabling automatic parameterization by calling the no_auto_parameterize method on the dataset (which returns a clone of the dataset).
It is likely there are other corner cases I am not yet aware of when using this extension, so use this extension with caution.
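The core transformation can be illustrated with a greatly simplified literalizer (a hypothetical helper, not the extension's code) that emits $n placeholders and collects the values separately, the way a parameterized query is sent to PostgreSQL:

```ruby
# Builds a WHERE clause with $1, $2, ... placeholders from a
# conditions hash, collecting the values in order. Simplified: no
# identifier quoting and no type casts.
def parameterize(conditions)
  args = []
  sql = conditions.map do |column, value|
    args << value
    "#{column} = $#{args.size}"
  end.join(' AND ')
  [sql, args]
end

parameterize(:a => 1, :b => 'x') # => ["a = $1 AND b = $2", [1, "x"]]
```

The integer-in-ORDER-BY problem above falls out of this design: the literalizer sees only the value 1, with no way to know it was meant as a column position rather than data.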
This extension is only compatible when using the pg driver, not when using the old postgres driver or the postgres-pr driver.
The query_literals extension changes Sequel's default behavior of the select, order and group methods so that if the first argument is a regular string, it is treated as a literal string, with the rest of the arguments (if any) treated as placeholder values. This allows you to write code such as:
DB[:table].select('a, b, ?', 2).group('a, b').order('c')
The default Sequel behavior would literalize that as:
SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'
Using this extension changes the literalization to:
SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c
This extension makes select, group, and order methods operate like filter methods, which support the same interface.
There are very few places where Sequel's default behavior is desirable in this area, but for backwards compatibility, the defaults won't be changed until the next major release.
Loading this extension does nothing by default except make the Sequel::QueryLiterals module available. You can extend specific datasets with this module:
ds = DB[:table]
ds.extend(Sequel::QueryLiterals)
Or you can extend all of a database's datasets with it, which is probably the desired behavior if you are using this extension:
DB.extend_datasets(Sequel::QueryLiterals)
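The placeholder substitution the extension relies on can be sketched with a hypothetical helper that ignores real quoting and escaping rules:

```ruby
# Replaces each ? in a literal SQL fragment with the literalized form
# of the corresponding argument. Simplified: only numbers and single
# quoted strings, with no escaping.
def literalize_placeholders(fragment, *args)
  i = -1
  fragment.gsub('?') do
    arg = args[i += 1]
    arg.is_a?(String) ? "'#{arg}'" : arg.to_s
  end
end

literalize_placeholders('a, b, ?', 2)  # => "a, b, 2"
literalize_placeholders('a = ?', 'x')  # => "a = 'x'"
```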
Hash of adapters that have been used. The key is the adapter scheme symbol, and the value is the Database subclass.
Deprecated alias for HookFailed, kept for backwards compatibility
Array of all databases to which Sequel has connected. If you are developing an application that can connect to an arbitrary number of databases, delete the database objects from this or they will not get garbage collected.
Proc that is instance evaled to create the default inflections for both the model inflector and the inflector extension.
The major version of Sequel. Only bumped for major changes.
The minor version of Sequel. Bumped for every non-patch level release, generally around once a month.
The tiny version of Sequel. Usually 0, only bumped for bugfix releases that fix regressions from previous versions.
The version of Sequel you are using, as a string (e.g. "2.11.0")
Sequel converts two digit years in Dates and DateTimes by default, so 01/02/03 is interpreted as January 2nd, 2003, and 12/13/99 is interpreted as December 13, 1999. You can override this to treat those dates as January 2nd, 0003 and December 13, 0099, respectively, by:
Sequel.convert_two_digit_years = false
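The expansion rule that Date.parse applies in its completion mode, and which convert_two_digit_years toggles, can be written as a standalone function. The 69 cutoff below is my reading of Ruby's date parser behavior, sketched for illustration:

```ruby
# Expand a two digit year the way Date.parse's completion mode does:
# 69..99 map to the 1900s, 0..68 map to the 2000s.
def expand_two_digit_year(y)
  y >= 69 ? 1900 + y : 2000 + y
end

expand_two_digit_year(3)  # => 2003
expand_two_digit_year(99) # => 1999
```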
Sequel can use either Time or DateTime for times returned from the database. It defaults to Time. To change it to DateTime:
Sequel.datetime_class = DateTime
For ruby versions less than 1.9.2, Time has a limited range (1901 to 2038), so if you use datetimes out of that range, you need to switch to DateTime. Also, before 1.9.2, Time can only handle local and UTC times, not other timezones. Note that Time and DateTime objects have a different API, and in cases where they implement the same methods, they often implement them differently (e.g. + using seconds on Time and days on DateTime).
Sets whether or not to attempt to handle NULL values correctly when given an empty array. By default:
DB[:a].filter(:b=>[]) # SELECT * FROM a WHERE (b != b)
DB[:a].exclude(:b=>[]) # SELECT * FROM a WHERE (b = b)
However, some databases (e.g. MySQL) will perform very poorly with this type of query. You can set this to false to get the following behavior:
DB[:a].filter(:b=>[]) # SELECT * FROM a WHERE 1 = 0
DB[:a].exclude(:b=>[]) # SELECT * FROM a WHERE 1 = 1
This may not handle NULLs correctly, but can be much faster on some databases.
For backwards compatibility, has no effect.
Lets you create a Model subclass with its dataset already set. source should be an instance of one of the following classes:

Database :: Sets the database for this model to source. Generally only useful when subclassing directly from the returned class, where the name of the subclass sets the table name (which is combined with the Database in source to create the dataset to use).

Dataset :: Sets the dataset for this model to source.

Symbol :: Sets the table name for this model to source. The class will use the default database for model classes in order to create the dataset.
The purpose of this method is to set the dataset/database automatically for a model class, if the table name doesn't match the implicit name. This is neater than using set_dataset inside the class, and doesn't require a bogus query for the schema.
# Using a symbol
class Comment < Sequel::Model(:something)
  table_name # => :something
end

# Using a dataset
class Comment < Sequel::Model(DB1[:something])
  dataset # => DB1[:something]
end

# Using a database
class Comment < Sequel::Model(DB1)
  dataset # => DB1[:comments]
end
# File lib/sequel/model.rb, line 37
def self.Model(source)
  if Sequel::Model.cache_anonymous_models && (klass = Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source]})
    return klass
  end
  klass = if source.is_a?(Database)
    c = Class.new(Model)
    c.db = source
    c
  else
    Class.new(Model).set_dataset(source)
  end
  Sequel.synchronize{Model::ANONYMOUS_MODEL_CLASSES[source] = klass} if Sequel::Model.cache_anonymous_models
  klass
end
Returns true if the passed object could be a specifier of conditions, false otherwise. Currently, Sequel considers hashes and arrays of two element arrays as condition specifiers.
Sequel.condition_specifier?({}) # => true
Sequel.condition_specifier?([[1, 2]]) # => true
Sequel.condition_specifier?([]) # => false
Sequel.condition_specifier?([1]) # => false
Sequel.condition_specifier?(1) # => false
# File lib/sequel/core.rb, line 117
def self.condition_specifier?(obj)
  case obj
  when Hash
    true
  when Array
    !obj.empty? && !obj.is_a?(SQL::ValueList) && obj.all?{|i| (Array === i) && (i.length == 2)}
  else
    false
  end
end
Creates a new database object based on the supplied connection string and optional arguments. The specified scheme determines the database class used, and the rest of the string specifies the connection options. For example:
DB = Sequel.connect('sqlite:/') # Memory database
DB = Sequel.connect('sqlite://blog.db') # ./blog.db
DB = Sequel.connect('sqlite:///blog.db') # /blog.db
DB = Sequel.connect('postgres://user:password@host:port/database_name')
DB = Sequel.connect('sqlite:///blog.db', :max_connections=>10)
If a block is given, it is passed the opened Database object, which is closed when the block exits. For example:
Sequel.connect('sqlite://blog.db'){|db| puts db[:users].count}
For details, see the "Connecting to a Database" guide. To set up a master/slave or sharded database connection, see the "Master/Slave Databases and Sharding" guide.
# File lib/sequel/core.rb, line 146
def self.connect(*args, &block)
  Database.connect(*args, &block)
end
Convert the exception to the given class. The given class should be Sequel::Error or a subclass. Returns an instance of klass with the message and backtrace of exception.
# File lib/sequel/core.rb, line 171
def self.convert_exception_class(exception, klass)
  return exception if exception.is_a?(klass)
  e = klass.new("#{exception.class}: #{exception.message}")
  e.wrapped_exception = exception
  e.set_backtrace(exception.backtrace)
  e
end
Whether the core extensions are enabled. The core extensions are enabled by default for backwards compatibility, but can be disabled using the SEQUEL_NO_CORE_EXTENSIONS constant or environment variable.
# File lib/sequel/core.rb, line 154
def self.core_extensions?
  # We override this method to return true inside the core_extensions.rb file,
  # but we also set it here because that file is not loaded until most of Sequel
  # is finished loading, and parts of Sequel check whether the core extensions
  # are loaded.
  true
end
Load all Sequel extensions given. Extensions are just files that exist under sequel/extensions in the load path, and are just required. Generally, extensions modify the behavior of Database and/or Dataset, but Sequel ships with some extensions that modify other classes that exist for backwards compatibility. In some cases, requiring an extension modifies classes directly, and in others, it just loads a module that you can extend other classes with. Consult the documentation for each extension you plan on using for usage.
Sequel.extension(:schema_dumper)
Sequel.extension(:pagination, :query)
# File lib/sequel/core.rb, line 189
def self.extension(*extensions)
  extensions.each{|e| tsk_require "sequel/extensions/#{e}"}
end
Set the method to call on identifiers going into the database. This affects the literalization of identifiers by calling this method on them before they are input. Sequel upcases identifiers in all SQL strings for most databases, so to turn that off:
Sequel.identifier_input_method = nil
to downcase instead:
Sequel.identifier_input_method = :downcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 204
def self.identifier_input_method=(value)
  Database.identifier_input_method = value
end
Set the method to call on identifiers coming out of the database. This affects the literalization of identifiers by calling this method on them when they are retrieved from the database. Sequel downcases identifiers retrieved for most databases, so to turn that off:
Sequel.identifier_output_method = nil
to upcase instead:
Sequel.identifier_output_method = :upcase
Other String instance methods work as well.
# File lib/sequel/core.rb, line 220
def self.identifier_output_method=(value)
  Database.identifier_output_method = value
end
Yield the Inflections module if a block is given, and return the Inflections module.
# File lib/sequel/model/inflections.rb, line 4
def self.inflections
  yield Inflections if block_given?
  Inflections
end
Alias to the standard version of require
The preferred method for writing Sequel migrations, using a DSL:
Sequel.migration do
  up do
    create_table(:artists) do
      primary_key :id
      String :name
    end
  end

  down do
    drop_table(:artists)
  end
end
Designed to be used with the Migrator class, part of the migration extension.
# File lib/sequel/extensions/migration.rb, line 269
def self.migration(&block)
  MigrationDSL.create(&block)
end
Require all given files, which should be in the same or a subdirectory of this file. If a subdir is given, assume all files are in that subdir. This is used to ensure that the files loaded are from the same version of Sequel as this file.
# File lib/sequel/core.rb, line 236
def self.require(files, subdir=nil)
  Array(files).each{|f| super("#{File.dirname(__FILE__).untaint}/#{"#{subdir}/" if subdir}#{f}")}
end
Set whether Sequel is being used in single threaded mode. By default, Sequel uses a thread-safe connection pool, which isn't as fast as the single threaded connection pool, and also has some additional thread safety checks. If your program will only have one thread, and speed is a priority, you should set this to true:
Sequel.single_threaded = true
# File lib/sequel/core.rb, line 247
def self.single_threaded=(value)
  @single_threaded = value
  Database.single_threaded = value
end
Converts the given string into a Date object.
Sequel.string_to_date('2010-09-10') # Date.civil(2010, 09, 10)
# File lib/sequel/core.rb, line 255
def self.string_to_date(string)
  begin
    Date.parse(string, Sequel.convert_two_digit_years)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Time or DateTime object, depending on the value of Sequel.datetime_class.
Sequel.string_to_datetime('2010-09-10 10:20:30') # Time.local(2010, 09, 10, 10, 20, 30)
# File lib/sequel/core.rb, line 267
def self.string_to_datetime(string)
  begin
    if datetime_class == DateTime
      DateTime.parse(string, convert_two_digit_years)
    else
      datetime_class.parse(string)
    end
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Converts the given string into a Sequel::SQLTime object.
v = Sequel.string_to_time('10:20:30') # Sequel::SQLTime.parse('10:20:30')
DB.literal(v) # => '10:20:30'
# File lib/sequel/core.rb, line 283
def self.string_to_time(string)
  begin
    SQLTime.parse(string)
  rescue => e
    raise convert_exception_class(e, InvalidValue)
  end
end
Unless in single threaded mode, protects access to any mutable global data structure in Sequel. Uses a non-reentrant mutex, so calling code should be careful.
# File lib/sequel/core.rb, line 298
def self.synchronize(&block)
  @single_threaded ? yield : @data_mutex.synchronize(&block)
end
Uses a transaction on all given databases with the given options. This:
Sequel.transaction([DB1, DB2, DB3]){...}
is equivalent to:
DB1.transaction do
  DB2.transaction do
    DB3.transaction do
      ...
    end
  end
end
except that if Sequel::Rollback is raised by the block, the transaction is rolled back on all databases instead of just the last one.
Note that this method cannot guarantee that all databases will commit or rollback. For example, if DB3 commits but attempting to commit on DB2 fails (maybe because foreign key checks are deferred), there is no way to uncommit the changes on DB3. For that kind of support, you need to have two-phase commit/prepared transactions (which Sequel supports on some databases).
# File lib/sequel/core.rb, line 332
def self.transaction(dbs, opts={}, &block)
  unless opts[:rollback]
    rescue_rollback = true
    opts = opts.merge(:rollback=>:reraise)
  end
  pr = dbs.reverse.inject(block){|bl, db| proc{db.transaction(opts, &bl)}}
  if rescue_rollback
    begin
      pr.call
    rescue Sequel::Rollback => e
      nil
    end
  else
    pr.call
  end
end
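The reverse.inject trick used in transaction is a general way to wrap a block in n layers, outermost first. A minimal demonstration with plain procs:

```ruby
# Wraps a block in one layer per element, the same way
# Sequel.transaction nests the DB1/DB2/DB3 transactions.
order = []
wrappers = [:a, :b, :c]
innermost = proc { order << :body }

# Reversing first means the last wrapper is applied innermost, so the
# first element of the list ends up as the outermost layer.
nested = wrappers.reverse.inject(innermost) do |bl, name|
  proc do
    order << name
    bl.call
  end
end

nested.call
order # => [:a, :b, :c, :body]
```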
Same as ::require, but wrapped in a mutex in order to be thread safe.
# File lib/sequel/core.rb, line 350
def self.ts_require(*args)
  check_requiring_thread{require(*args)}
end
Same as Kernel.require, but wrapped in a mutex in order to be thread safe.
# File lib/sequel/core.rb, line 355
def self.tsk_require(*args)
  check_requiring_thread{k_require(*args)}
end
The version of Sequel you are using, as a string (e.g. "2.11.0")
# File lib/sequel/version.rb, line 15
def self.version
  VERSION
end
If the supplied block takes a single argument, yield a new SQL::VirtualRow instance to the block argument. Otherwise, evaluate the block in the context of a new SQL::VirtualRow instance.
Sequel.virtual_row{a} # Sequel::SQL::Identifier.new(:a)
Sequel.virtual_row{|o| o.a{}} # Sequel::SQL::Function.new(:a)
# File lib/sequel/core.rb, line 366
def self.virtual_row(&block)
  vr = SQL::VirtualRow.new
  case block.arity
  when -1, 0
    vr.instance_eval(&block)
  else
    block.call(vr)
  end
end
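The arity check drives a common Ruby idiom: instance_eval when the block takes no argument, call with the receiver when it takes one. A self-contained sketch of the same dispatch, using an invented Builder class:

```ruby
class Builder
  def greeting
    'hello'
  end

  # Evaluate the block in this object's context when it takes no
  # argument, otherwise pass the object in, mirroring virtual_row.
  def evaluate(&block)
    case block.arity
    when -1, 0
      instance_eval(&block)
    else
      block.call(self)
    end
  end
end

b = Builder.new
b.evaluate { greeting }       # => "hello" (block instance_eval'd)
b.evaluate { |o| o.greeting } # => "hello" (object passed as argument)
```

The no-argument form gives terser code but rebinds self inside the block; the one-argument form keeps the caller's self intact, which is why APIs like this often support both.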